[openstack-dev] [octavia] Some tips about amphora driver

2018-07-05 Thread Jeff Yang
Recently, my team has been planning to provide load balancing services with
Octavia. I recorded some of the needs and suggestions of our team members.
The following suggestions about the amphora driver may be very useful.

[1] Users can specify the image and flavor for amphorae.
[2] Enable multiple processes (haproxy versions < 1.8) or multiple threads
(versions >= 1.8) for haproxy.
[3] Provide a script to check and clean up bad load balancers and amphorae.
Moreover, we also need to clean up the Neutron and Nova resources associated
with these load balancers and amphorae.

The implementation of [1] and [2] depends on the provider flavor framework,
so it's time to implement that framework.
Regarding [3]: we can't delete a load balancer through the API if its status
is PENDING_UPDATE or PENDING_CREATE, and there is no API for deleting an
amphora, so an amphora that is not ACTIVE will exist forever. That is why
the cleanup script is necessary.
https://storyboard.openstack.org/#!/story/2002896
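
As a rough sketch of what the cleanup script in [3] could look like (this is
not an official Octavia tool: it assumes openstacksdk with load-balancer
support and a configured clouds.yaml, the cloud name and the cascade flag are
illustrative, and stuck records may first need their provisioning_status
reset directly in the Octavia database before the API will accept a delete):

    import openstack

    conn = openstack.connect(cloud='mycloud')  # hypothetical cloud name

    # Dry run by default: report load balancers stuck in PENDING_*;
    # uncomment the delete call to actually cascade-delete them.
    for lb in conn.load_balancer.load_balancers():
        if lb.provisioning_status in ('PENDING_CREATE', 'PENDING_UPDATE'):
            print('stuck load balancer: %s (%s)' % (lb.name, lb.id))
            # conn.load_balancer.delete_load_balancer(lb.id, cascade=True)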


Re: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http

2018-07-05 Thread Ghanshyam Mann



  On Fri, 06 Jul 2018 11:30:15 +0900 Alex Xu  wrote  
 > 
 > 
 > 2018-07-06 10:03 GMT+08:00 Alex Xu :
 > 
 > 
 > 2018-07-06 2:55 GMT+08:00 melanie witt :
 > +openstack-dev@
 >  
 >  On Wed, 4 Jul 2018 14:50:26 +, Bogdan Katynski wrote:
 >   But, I can not use nova command, endpoint nova have been redirected from 
 > https to http. Here: http://prntscr.com/k2e8s6  (command: nova --insecure 
 > service list)
 >   First of all, it seems that the nova client is hitting /v2.1 instead of 
 > /v2.1/ URI and this seems to be triggering the redirect.
 >  
 >  Since openstack CLI works, I presume it must be using the correct URL and 
 > hence it’s not getting redirected.
 >  
 > And this is error log: Unable to establish connection 
 > to http://192.168.30.70:8774/v2.1/: ('Connection aborted.', 
 > BadStatusLine("''",))
 >
 >   Looks to me that nova-api does a redirect to an absolute URL. I suspect 
 > SSL is terminated on the HAProxy and nova-api itself is configured without 
 > SSL so it redirects to an http URL.
 >  
 >  In my opinion, nova would be more load-balancer friendly if it used a 
 > relative URI in the redirect but that’s outside of the scope of this 
 > question and since I don’t know the context behind choosing the absolute 
 > URL, I could be wrong on that.
 >   
 >  Thanks for mentioning this. We do have a bug open in python-novaclient 
 > around a similar issue [1]. I've added comments based on this thread and 
 > will consult with the API subteam to see if there's something we can do 
 > about this in nova-api.

We can support both URLs for the version API in that case (/v2.1 and /v2.1/). 
Instead of redirecting, we can map '' to the 'GET': [version_controller, 
'show'] route, something like [1]. 

[1] https://review.openstack.org/#/c/580544/
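
The idea, roughly, in terms of the Routes library that the nova API uses
(the controller and action names below are illustrative, not nova's actual
code):

    import routes

    mapper = routes.Mapper()
    # Map both the bare and the trailing-slash form of the versioned root
    # to the "show version" handler, so no redirect is ever issued.
    mapper.connect('/v2.1', controller='versions', action='show')
    mapper.connect('/v2.1/', controller='versions', action='show')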

-gmann

 >  
 > 
 > Emm...check with the RFC, it said the value of Location header is absolute 
 > URL https://tools.ietf.org/html/rfc2616.html#section-14.30
 > Sorry, correct that. the RFC7231 updated that. The relative URL is ok. 
 > https://tools.ietf.org/html/rfc7231#section-7.1.2   -melanie
 >  
 >  [1] https://bugs.launchpad.net/python-novaclient/+bug/1776928
 >  
 >  
 >  
 >  


[openstack-dev] [masakari] Introspective Instance Monitoring through QEMU Guest Agent

2018-07-05 Thread Kwan, Louie
Thanks, Tushar Patil, for the +1 on 547118.

Regarding the following review:
https://review.openstack.org/#/c/534958/

Are there any more comments?

Thanks.
Louie

From: Tushar Patil (Code Review) [rev...@openstack.org]
Sent: Tuesday, July 03, 2018 8:48 PM
To: Kwan, Louie
Cc: Tim Bell; zhangyanying; Waines, Greg; Li Yingjun; wangqiang-bj; Tushar 
Patil; Ken Young; NTT system-fault-ci masakari-integration-ci; wangqiang; 
Abhishek Kekane; takahara.kengo; Rikimaru Honjo; Adam Spiers; Sampath 
Priyankara (samP); Dinesh Bhor
Subject: Change in openstack/masakari[master]: Introspective Instance 
Monitoring through QEMU Guest Agent

Tushar Patil has posted comments on this change. ( 
https://review.openstack.org/547118 )

Change subject: Introspective Instance Monitoring through QEMU Guest Agent
..


Patch Set 3: Workflow+1

--
To view, visit https://review.openstack.org/547118
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I9efc6afc8d476003d3aa7fee8c31bcaa65438674
Gerrit-PatchSet: 3
Gerrit-Project: openstack/masakari
Gerrit-Branch: master
Gerrit-Owner: Louie Kwan 
Gerrit-Reviewer: Abhishek Kekane 
Gerrit-Reviewer: Adam Spiers 
Gerrit-Reviewer: Dinesh Bhor 
Gerrit-Reviewer: Greg Waines 
Gerrit-Reviewer: Hieu LE 
Gerrit-Reviewer: Ken Young 
Gerrit-Reviewer: Li Yingjun 
Gerrit-Reviewer: Louie Kwan 
Gerrit-Reviewer: NTT system-fault-ci masakari-integration-ci 

Gerrit-Reviewer: Rikimaru Honjo 
Gerrit-Reviewer: Sampath Priyankara (samP) 
Gerrit-Reviewer: Tim Bell 
Gerrit-Reviewer: Tushar Patil 
Gerrit-Reviewer: Tushar Patil 
Gerrit-Reviewer: Zuul
Gerrit-Reviewer: takahara.kengo 
Gerrit-Reviewer: wangqiang 
Gerrit-Reviewer: wangqiang-bj 
Gerrit-Reviewer: zhangyanying 
Gerrit-HasComments: No



Re: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http

2018-07-05 Thread Alex Xu
2018-07-06 10:03 GMT+08:00 Alex Xu :

>
>
> 2018-07-06 2:55 GMT+08:00 melanie witt :
>
>> +openstack-dev@
>>
>> On Wed, 4 Jul 2018 14:50:26 +, Bogdan Katynski wrote:
>>
>>> But, I can not use nova command, endpoint nova have been redirected from
 https to http. Here: http://prntscr.com/k2e8s6  (command: nova
 --insecure service list)

>>> First of all, it seems that the nova client is hitting /v2.1 instead of
>>> /v2.1/ URI and this seems to be triggering the redirect.
>>>
>>> Since openstack CLI works, I presume it must be using the correct URL
>>> and hence it’s not getting redirected.
>>>
>>>   And this is error log: Unable to establish connection to http://
 192.168.30.70:8774/v2.1/: ('Connection aborted.', BadStatusLine("''",))


>>> Looks to me that nova-api does a redirect to an absolute URL. I suspect
>>> SSL is terminated on the HAProxy and nova-api itself is configured without
>>> SSL so it redirects to an http URL.
>>>
>>> In my opinion, nova would be more load-balancer friendly if it used a
>>> relative URI in the redirect but that’s outside of the scope of this
>>> question and since I don’t know the context behind choosing the absolute
>>> URL, I could be wrong on that.
>>>
>>
>> Thanks for mentioning this. We do have a bug open in python-novaclient
>> around a similar issue [1]. I've added comments based on this thread and
>> will consult with the API subteam to see if there's something we can do
>> about this in nova-api.
>>
>>
> Emm...check with the RFC, it said the value of Location header is absolute
> URL https://tools.ietf.org/html/rfc2616.html#section-14.30
>

Sorry, correct that: RFC 7231 updated this, and a relative URL is ok.
https://tools.ietf.org/html/rfc7231#section-7.1.2
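
A quick illustration (standard library only) of what RFC 7231 allows: the
client resolves a relative Location header against the request URI, so the
same redirect works behind both http and https front ends:

    from urllib.parse import urljoin

    # Request URI was /v2.1 (no trailing slash); a relative Location of
    # 'v2.1/' resolves to the trailing-slash form on the same host.
    print(urljoin('http://192.168.30.70:8774/v2.1', 'v2.1/'))
    # -> http://192.168.30.70:8774/v2.1/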


>
>
>> -melanie
>>
>> [1] https://bugs.launchpad.net/python-novaclient/+bug/1776928
>>
>>
>>
>>


Re: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http

2018-07-05 Thread Alex Xu
2018-07-06 2:55 GMT+08:00 melanie witt :

> +openstack-dev@
>
> On Wed, 4 Jul 2018 14:50:26 +, Bogdan Katynski wrote:
>
>> But, I can not use nova command, endpoint nova have been redirected from
>>> https to http. Here: http://prntscr.com/k2e8s6  (command: nova --insecure
>>> service list)
>>>
>> First of all, it seems that the nova client is hitting /v2.1 instead of
>> /v2.1/ URI and this seems to be triggering the redirect.
>>
>> Since openstack CLI works, I presume it must be using the correct URL and
>> hence it’s not getting redirected.
>>
>>   And this is error log: Unable to establish connection to http://
>>> 192.168.30.70:8774/v2.1/: ('Connection aborted.', BadStatusLine("''",))
>>>
>>>
>> Looks to me that nova-api does a redirect to an absolute URL. I suspect
>> SSL is terminated on the HAProxy and nova-api itself is configured without
>> SSL so it redirects to an http URL.
>>
>> In my opinion, nova would be more load-balancer friendly if it used a
>> relative URI in the redirect but that’s outside of the scope of this
>> question and since I don’t know the context behind choosing the absolute
>> URL, I could be wrong on that.
>>
>
> Thanks for mentioning this. We do have a bug open in python-novaclient
> around a similar issue [1]. I've added comments based on this thread and
> will consult with the API subteam to see if there's something we can do
> about this in nova-api.
>
>
Emm... checking the RFC, it says the value of the Location header is an
absolute URL: https://tools.ietf.org/html/rfc2616.html#section-14.30


> -melanie
>
> [1] https://bugs.launchpad.net/python-novaclient/+bug/1776928
>
>
>
>


[openstack-dev] [python3][tc][infra][docs] changing the documentation build PTI to use tox

2018-07-05 Thread Doug Hellmann
I have a governance patch up [1] to change the project-testing-interface
(PTI) for building documentation to restore the use of tox.

We originally changed away from tox because we wanted to have a
single standard command that anyone could use to build the documentation
for a project. It turns out that is more complicated than just
running sphinx-build in a lot of cases anyway, because of course
you have a bunch of dependencies to install before sphinx-build
will work.

Updating the job that uses sphinx directly to run under python 3,
while allowing the transition to be self-testing, was going to
require writing some extra complexity to look at something in the
repository to decide what version of python to use.  Since tox
handles that for us by letting us set basepython in the virtualenv
configuration, it seemed more straightforward to go back to using
tox.

So, this new PTI definition restores the use of tox and specifies
a "docs" environment. I have started defining the relevant jobs [2]
and project templates [3], and I will be updating the python3-first
transition plan as well.

Let me know if you have any questions about any of that,
Doug

[1] https://review.openstack.org/#/c/580495/
[2] 
https://review.openstack.org/#/q/project:openstack-infra/project-config+topic:python3-first
[3] 
https://review.openstack.org/#/q/project:openstack-infra/openstack-zuul-jobs+topic:python3-first



Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Fox, Kevin M
Interesting. Thanks for the link. :)

There is a lot of stuff there, so I'm not sure it covers the part I'm talking 
about without more review, but if it doesn't, it would be pretty easy to add 
by the looks of it.

Kevin

From: Jeremy Stanley [fu...@yuggoth.org]
Sent: Thursday, July 05, 2018 10:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 2018-07-05 17:30:23 + (+), Fox, Kevin M wrote:
[...]
> Deploying k8s doesn't need a general solution to deploying generic
> base OS's. Just enough OS to deploy K8s and then deploy everything
> on top in containers. Deploying a seed k8s with minikube is pretty
> trivial. I'm not suggesting a solution here to provide generic
> provisioning to every use case in the datacenter. But enough to
> get a k8s based cluster up and self hosted enough where you could
> launch other provisioning/management tools in that same cluster,
> if you need that. It provides a solid base for the datacenter on
> which you can easily add the services you need for dealing with
> everything.
>
> All of the microservices I mentioned can be wrapped up in a single
> helm chart and deployed with a single helm install command.
>
> I don't have permission to release anything at the moment, so I
> can't prove anything right now. So, take my advice with a grain of
> salt. :)
[...]

Anything like http://www.airshipit.org/ ?
--
Jeremy Stanley



Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Fox, Kevin M
I use RDO in production. It's pretty far from Red Hat OpenStack, though it's 
been a while since I tried the TripleO part of RDO. Is it pretty well 
integrated now? Similar to Red Hat OpenStack? Or is it more Fedora-like than 
CentOS-like?

Thanks,
Kevin

From: Dmitry Tantsur [dtant...@redhat.com]
Sent: Thursday, July 05, 2018 11:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26




On Thu, Jul 5, 2018, 19:31 Fox, Kevin M  wrote:
We're pretty far into a tangent...

/me shrugs. I've done it. It can work.

Some things you're right about: deploying k8s is more work than deploying ansible. But 
what I said depends on context. If your goal is to deploy k8s/manage k8s then 
having to learn how to use k8s is not a big ask. adding a different tool such 
as ansible is an extra cognitive dependency. Deploying k8s doesn't need a 
general solution to deploying generic base OS's. Just enough OS to deploy K8s 
and then deploy everything on top in containers. Deploying a seed k8s with 
minikube is pretty trivial. I'm not suggesting a solution here to provide 
generic provisioning to every use case in the datacenter. But enough to get a 
k8s based cluster up and self hosted enough where you could launch other 
provisioning/management tools in that same cluster, if you need that. It 
provides a solid base for the datacenter on which you can easily add the 
services you need for dealing with everything.

All of the microservices I mentioned can be wrapped up in a single helm chart 
and deployed with a single helm install command.

I don't have permission to release anything at the moment, so I can't prove 
anything right now. So, take my advice with a grain of salt. :)

Switching gears, you said why would users use lfs when they can use a distro, 
so why use openstack without a distro. I'd say, today unless you are paying a 
lot, there isn't really an equivalent distro that isn't almost as much effort 
as lfs when you consider day2 ops. To compare with Redhat again, we have a RHEL 
(redhat openstack), and Rawhide (devstack) but no equivalent of CentOS. Though 
I think TripleO has been making progress on this front...

It's RDO what you're looking for (equivalent of centos). TripleO is an 
installer project, not a distribution.


Anyway. This thread is I think 2 tangents away from the original topic now. If 
folks are interested in continuing this discussion, let's open a new thread.

Thanks,
Kevin


From: Dmitry Tantsur [dtant...@redhat.com]
Sent: Wednesday, July 04, 2018 4:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Tried hard to avoid this thread, but this message is so much wrong..

On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
> I don't dispute trivial, but a self hosting k8s on bare metal is not 
> incredibly hard. In fact, it is easier than you might think. k8s is a 
> platform for deploying/managing services. Guess what you need to provision 
> bare metal? Just a few microservices. A dhcp service. dhcpd in a daemonset 
> works well. some pxe infrastructure. pixiecore with a simple http backend 
> works pretty well in practice. a service to provide installation 
> instructions. nginx server handing out kickstart files for example. and a 
> place to fetch rpms from in case you don't have internet access or want to 
> ensure uniformity. nginx server with a mirror yum repo. It's even possible to 
> seed it on minikube and sluff it off to its own cluster.
>
> The main hard part about it is currently no one is shipping a reference 
> implementation of the above. That may change...
>
> It is certainly much much easier than deploying enough OpenStack to get a 
> self hosting ironic working.

Side note: no, it's not. What you describe is similarly hard to installing
standalone ironic from scratch and much harder than using bifrost for
everything. Especially when you try to do it in production. Especially with
unusual operating requirements ("no TFTP servers on my network").

Also, sorry, I cannot resist:
"Guess what you need to orchestrate containers? Just a few things. A container
runtime. Docker works well. some remote execution tooling. ansible works pretty
well in practice. It is certainly much much easier than deploying enough k8s to
get a self hosting containers orchestration working."

Such oversimplifications won't bring us anywhere. Sometimes things are hard
because they ARE hard. Where are people complaining that installing a full
GNU/Linux distribution from upstream tarballs is hard? How many operators here
use LFS as their distro? If we are okay with using a distro for GNU/Linux, why
does using a distro for OpenStack cause so much contention?

>
> Thanks,
> Kevin
>
> 
> From: Jay Pipes 

Re: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http

2018-07-05 Thread Monty Taylor

On 07/05/2018 01:55 PM, melanie witt wrote:

+openstack-dev@

On Wed, 4 Jul 2018 14:50:26 +, Bogdan Katynski wrote:
But, I can not use nova command, endpoint nova have been redirected 
from https to http. Here: http://prntscr.com/k2e8s6  (command: nova 
--insecure service list)
First of all, it seems that the nova client is hitting /v2.1 instead 
of /v2.1/ URI and this seems to be triggering the redirect.


Since openstack CLI works, I presume it must be using the correct URL 
and hence it’s not getting redirected.


And this is error log: Unable to establish connection 
to http://192.168.30.70:8774/v2.1/: ('Connection aborted.', 
BadStatusLine("''",))
Looks to me that nova-api does a redirect to an absolute URL. I 
suspect SSL is terminated on the HAProxy and nova-api itself is 
configured without SSL so it redirects to an http URL.


In my opinion, nova would be more load-balancer friendly if it used a 
relative URI in the redirect but that’s outside of the scope of this 
question and since I don’t know the context behind choosing the 
absolute URL, I could be wrong on that.


Thanks for mentioning this. We do have a bug open in python-novaclient 
around a similar issue [1]. I've added comments based on this thread and 
will consult with the API subteam to see if there's something we can do 
about this in nova-api.


A similar thing came up the other day related to keystone and version 
discovery. Version discovery documents tend to return full urls - even 
though relative urls would make public/internal API endpoints work 
better. (also, sometimes people don't configure things properly and the 
version discovery url winds up being incorrect)


In shade/sdk - we actually construct a wholly-new discovery url based on 
the url used for the catalog and the url in the discovery document since 
we've learned that the version discovery urls are frequently broken.


This is problematic because SOMETIMES people have public urls deployed 
as a sub-url and internal urls deployed on a port - so you have:


Catalog:
public: https://example.com/compute
internal: https://compute.example.com:1234

Version discovery:
https://example.com/compute/v2.1

When we go to combine the catalog url and the versioned url, if the user 
is hitting internal, we produce 
https://compute.example.com:1234/compute/v2.1 - because we have no way 
of systematically knowing that /compute should also be stripped.
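
A toy illustration of that failure mode (not the actual keystoneauth/shade
code):

    from urllib.parse import urlparse

    catalog_url = 'https://compute.example.com:1234'    # internal endpoint
    discovery_url = 'https://example.com/compute/v2.1'  # from version doc

    # Naive combination: keep the catalog's scheme/host and graft on the
    # path from the discovery document.
    combined = catalog_url.rstrip('/') + urlparse(discovery_url).path
    print(combined)
    # -> https://compute.example.com:1234/compute/v2.1  (wrong: nothing
    # tells us that the '/compute' prefix should have been stripped)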


VERY LONG WINDED WAY of saying 2 things:

a) Relative URLs would be *way* friendlier (and incidentally are 
supported by keystoneauth, openstacksdk and shade - and are written up 
as being a thing people *should* support in the documents about API 
consumption)


b) Can we get agreement that changing behavior to return or redirect to 
a relative URL would not be considered an api contract break? (it's 
possible the answer to this is 'no' - so it's a real question)


Monty



Re: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http

2018-07-05 Thread melanie witt

+openstack-dev@

On Wed, 4 Jul 2018 14:50:26 +, Bogdan Katynski wrote:

But, I can not use nova command, endpoint nova have been redirected from https 
to http. Here: http://prntscr.com/k2e8s6  (command: nova --insecure service list)

First of all, it seems that the nova client is hitting /v2.1 instead of /v2.1/ 
URI and this seems to be triggering the redirect.

Since openstack CLI works, I presume it must be using the correct URL and hence 
it’s not getting redirected.

  
And this is error log: Unable to establish connection to http://192.168.30.70:8774/v2.1/: ('Connection aborted.', BadStatusLine("''",))
  

Looks to me that nova-api does a redirect to an absolute URL. I suspect SSL is 
terminated on the HAProxy and nova-api itself is configured without SSL so it 
redirects to an http URL.

In my opinion, nova would be more load-balancer friendly if it used a relative 
URI in the redirect but that’s outside of the scope of this question and since 
I don’t know the context behind choosing the absolute URL, I could be wrong on 
that.


Thanks for mentioning this. We do have a bug open in python-novaclient 
around a similar issue [1]. I've added comments based on this thread and 
will consult with the API subteam to see if there's something we can do 
about this in nova-api.


-melanie

[1] https://bugs.launchpad.net/python-novaclient/+bug/1776928
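
For anyone who wants to reproduce this locally, a quick probe along these
lines (endpoint taken from the report above) shows the status code and the
Location header without following the redirect:

    import requests

    # If nova-api redirects to an absolute http:// URL, it shows up here.
    resp = requests.get('http://192.168.30.70:8774/v2.1',
                        allow_redirects=False)
    print(resp.status_code, resp.headers.get('Location'))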






Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-07-05 Thread Dan Prince
On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
> 
> I would almost rather see us organize the directories by service
> name/project instead of implementation.
> 
> Instead of:
> 
> puppet/services/nova-api.yaml
> puppet/services/nova-conductor.yaml
> docker/services/nova-api.yaml
> docker/services/nova-conductor.yaml
> 
> We'd have:
> 
> services/nova/nova-api-puppet.yaml
> services/nova/nova-conductor-puppet.yaml
> services/nova/nova-api-docker.yaml
> services/nova/nova-conductor-docker.yaml
> 
> (or perhaps even another level of directories to indicate
> puppet/docker/ansible?)

I'd be open to this, but changes on this scale have a much larger
developer and user impact than what I was thinking we would be willing
to entertain for the issue that caused me to bring this up (i.e., how to
identify services which get configured by Ansible).

It's also worth noting that many projects keep these sorts of things in
different repos. Kolla, for example, fully separates kolla-ansible and
kolla-kubernetes, as they are quite divergent. We have been able to
preserve some of our common service architectures, but as things move
towards kubernetes we may wish to change things structurally a bit
too.

Dan




Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Dmitry Tantsur
On Thu, Jul 5, 2018, 19:31 Fox, Kevin M  wrote:

> We're pretty far into a tangent...
>
> /me shrugs. I've done it. It can work.
>
> Some things you're right about: deploying k8s is more work than deploying ansible.
> But what I said depends on context. If your goal is to deploy k8s/manage
> k8s then having to learn how to use k8s is not a big ask. adding a
> different tool such as ansible is an extra cognitive dependency. Deploying
> k8s doesn't need a general solution to deploying generic base OS's. Just
> enough OS to deploy K8s and then deploy everything on top in containers.
> Deploying a seed k8s with minikube is pretty trivial. I'm not suggesting a
> solution here to provide generic provisioning to every use case in the
> datacenter. But enough to get a k8s based cluster up and self hosted enough
> where you could launch other provisioning/management tools in that same
> cluster, if you need that. It provides a solid base for the datacenter on
> which you can easily add the services you need for dealing with everything.
>
> All of the microservices I mentioned can be wrapped up in a single helm
> chart and deployed with a single helm install command.
>
> I don't have permission to release anything at the moment, so I can't
> prove anything right now. So, take my advice with a grain of salt. :)
>
> Switching gears, you said why would users use lfs when they can use a
> distro, so why use openstack without a distro. I'd say, today unless you
> are paying a lot, there isn't really an equivalent distro that isn't almost
> as much effort as lfs when you consider day2 ops. To compare with Redhat
> again, we have a RHEL (redhat openstack), and Rawhide (devstack) but no
> equivalent of CentOS. Though I think TripleO has been making progress on
> this front...
>

It's RDO what you're looking for (equivalent of centos). TripleO is an
installer project, not a distribution.


> Anyway. This thread is I think 2 tangents away from the original topic
> now. If folks are interested in continuing this discussion, let's open a new
> thread.
>
> Thanks,
> Kevin
>
> 
> From: Dmitry Tantsur [dtant...@redhat.com]
> Sent: Wednesday, July 04, 2018 4:24 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
>
> Tried hard to avoid this thread, but this message is so much wrong..
>
> On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
> > I don't dispute trivial, but a self hosting k8s on bare metal is not
> incredibly hard. In fact, it is easier than you might think. k8s is a
> platform for deploying/managing services. Guess what you need to provision
> bare metal? Just a few microservices. A dhcp service. dhcpd in a daemonset
> works well. some pxe infrastructure. pixiecore with a simple http backend
> works pretty well in practice. a service to provide installation
> instructions. nginx server handing out kickstart files for example. and a
> place to fetch rpms from in case you don't have internet access or want to
> ensure uniformity. nginx server with a mirror yum repo. It's even possible
> to seed it on minikube and sluff it off to its own cluster.
> >
> > The main hard part about it is currently no one is shipping a reference
> implementation of the above. That may change...
> >
> > It is certainly much much easier than deploying enough OpenStack to get
> a self hosting ironic working.
>
> Side note: no, it's not. What you describe is similarly hard to installing
> standalone ironic from scratch and much harder than using bifrost for
> everything. Especially when you try to do it in production. Especially with
> unusual operating requirements ("no TFTP servers on my network").
>
> Also, sorry, I cannot resist:
> "Guess what you need to orchestrate containers? Just a few things. A
> container
> runtime. Docker works well. some remote execution tooling. ansible works
> pretty
> well in practice. It is certainly much much easier than deploying enough
> k8s to
> get a self hosting containers orchestration working."
>
> Such oversimplifications won't bring us anywhere. Sometimes things are hard
> because they ARE hard. Where are people complaining that installing a full
> GNU/Linux distribution from upstream tarballs is hard? How many operators
> here
> use LFS as their distro? If we are okay with using a distro for GNU/Linux,
> why
> does using a distro for OpenStack cause so much contention?
>
> >
> > Thanks,
> > Kevin
> >
> > 
> > From: Jay Pipes [jaypi...@gmail.com]
> > Sent: Tuesday, July 03, 2018 10:06 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
> >
> > On 07/02/2018 03:31 PM, Zane Bitter wrote:
> >> On 28/06/18 15:09, Fox, Kevin M wrote:
> >>>* made the barrier to testing/development as low as 'curl
> >>> http://..minikube; minikube start' (this spurs adoption and
> >>> contribution)
> >>
> >> That's not so different from devstack though.
> >>
> 

Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-07-05 Thread James Slagle
On Thu, Jul 5, 2018 at 1:50 PM, Dan Prince  wrote:
> Last week I was tinkering with my docker configuration a bit and was a
> bit surprised that puppet/services/docker.yaml no longer used puppet to
> configure the docker daemon. It now uses Ansible [1] which is very cool
> but brings up the question of how should we clearly indicate to
> developers and users that we are using Ansible vs Puppet for
> configuration?
>
> TripleO has been around for a while now, has supported multiple
> configuration and service types over the years: os-apply-config,
> puppet, containers, and now Ansible. In the past we've used rigid
> directory structures to identify which "service type" was used. More
> recently we mixed things up a bit more even by extending one service
> type from another ("docker" services all initially extended the
> "puppet" services to generate config files and provide an easy upgrade
> path).
>
> Similarly we now use Ansible all over the place for other things in
> many of our docker and puppet services for things like upgrades. That is
> all good too. I guess the thing I'm getting at here is just a way to
> cleanly identify which services are configured via Puppet vs. Ansible.
> And how can we do that in the least destructive way possible so as not
> to confuse ourselves and our users in the process.
>
> Also, I think it's worth keeping in mind that TripleO was once a multi-
> vendor project with vendors that had different preferences on service
> configuration. Also having the ability to support multiple
> configuration mechanisms in the future could once again present itself
> (thinking of Kubernetes as an example). Keeping in mind there may be a
> conversion period that could well last more than a release or two.
>
> I suggested a 'services/ansible' directory with mixed responses in our
> #tripleo meeting this week. Any other thoughts on the matter?

I would almost rather see us organize the directories by service
name/project instead of implementation.

Instead of:

puppet/services/nova-api.yaml
puppet/services/nova-conductor.yaml
docker/services/nova-api.yaml
docker/services/nova-conductor.yaml

We'd have:

services/nova/nova-api-puppet.yaml
services/nova/nova-conductor-puppet.yaml
services/nova/nova-api-docker.yaml
services/nova/nova-conductor-docker.yaml

(or perhaps even another level of directories to indicate
puppet/docker/ansible?)

Personally, such an organization is something I'm more used to. It
feels more similar to how most would expect a puppet module or ansible
role to be organized, where you have the abstraction (service
configuration) at a higher directory level than specific
implementations.

It would also lend itself more easily to adding implementations only
for specific services, and address the question of if a new top level
implementation directory needs to be created. For example, adding a
services/nova/nova-api-chef.yaml seems a lot less contentious than
adding a top level chef/services/nova-api.yaml.

It'd also be nice if we had a way to mark the default within a given
service's directory. Perhaps services/nova/nova-api-default.yaml,
which would be a new template that just consumes the default? Or
perhaps a symlink, although it was pointed out symlinks don't work in
swift containers. Still, that could possibly be addressed in our plan
upload workflows. Then the resource-registry would point at
nova-api-default.yaml. One could easily tell which is the default
without having to cross reference with the resource-registry.


-- 
-- James Slagle
--



[openstack-dev] [TripleO] easily identifying how services are configured

2018-07-05 Thread Dan Prince
Last week I was tinkering with my docker configuration a bit and was a
bit surprised that puppet/services/docker.yaml no longer used puppet to
configure the docker daemon. It now uses Ansible [1], which is very cool,
but it brings up the question: how should we clearly indicate to
developers and users that we are using Ansible vs. Puppet for
configuration?

TripleO has been around for a while now and has supported multiple
configuration and service types over the years: os-apply-config,
puppet, containers, and now Ansible. In the past we've used rigid
directory structures to identify which "service type" was used. More
recently we mixed things up even more by extending one service
type from another ("docker" services all initially extended the
"puppet" services to generate config files and provide an easy upgrade
path).

Similarly, we now use Ansible all over the place for other things in
many of our docker and puppet services, such as upgrades. That is
all good too. I guess the thing I'm getting at here is just a way to
cleanly identify which services are configured via Puppet vs. Ansible,
and how we can do that in the least destructive way possible so as not
to confuse ourselves and our users in the process.

Also, I think it's worth keeping in mind that TripleO was once a multi-
vendor project with vendors that had different preferences on service
configuration. The need to support multiple configuration mechanisms
could once again present itself in the future (thinking of Kubernetes
as an example), and any conversion period could well last more than a
release or two.

I suggested a 'services/ansible' directory, with mixed responses, in our
#tripleo meeting this week. Any other thoughts on the matter?

Thanks,

Dan

[1] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/comm
it/puppet/services/docker.yaml?id=00f5019ef28771e0b3544d0aa3110d5603d7c
159



Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Jeremy Stanley
On 2018-07-05 17:30:23 + (+), Fox, Kevin M wrote:
[...]
> Deploying k8s doesn't need a general solution to deploying generic
> base OS's. Just enough OS to deploy K8s and then deploy everything
> on top in containers. Deploying a seed k8s with minikube is pretty
> trivial. I'm not suggesting a solution here to provide generic
> provisioning to every use case in the datacenter. But enough to
> get a k8s based cluster up and self hosted enough where you could
> launch other provisioning/management tools in that same cluster,
> if you need that. It provides a solid base for the datacenter on
> which you can easily add the services you need for dealing with
> everything.
> 
> All of the microservices I mentioned can be wrapped up in a single
> helm chart and deployed with a single helm install command.
> 
> I don't have permission to release anything at the moment, so I
> can't prove anything right now. So, take my advice with a grain of
> salt. :)
[...]

Anything like http://www.airshipit.org/ ?
-- 
Jeremy Stanley




Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Fox, Kevin M
We're pretty far into a tangent...

/me shrugs. I've done it. It can work.

Some things you're right about: deploying k8s is more work than deploying ansible. But 
what I said depends on context. If your goal is to deploy k8s/manage k8s then 
having to learn how to use k8s is not a big ask. adding a different tool such 
as ansible is an extra cognitive dependency. Deploying k8s doesn't need a 
general solution to deploying generic base OS's. Just enough OS to deploy K8s 
and then deploy everything on top in containers. Deploying a seed k8s with 
minikube is pretty trivial. I'm not suggesting a solution here to provide 
generic provisioning to every use case in the datacenter. But enough to get a 
k8s based cluster up and self hosted enough where you could launch other 
provisioning/management tools in that same cluster, if you need that. It 
provides a solid base for the datacenter on which you can easily add the 
services you need for dealing with everything.

All of the microservices I mentioned can be wrapped up in a single helm chart 
and deployed with a single helm install command.

I don't have permission to release anything at the moment, so I can't prove 
anything right now. So, take my advice with a grain of salt. :)

Switching gears: you asked why users would use LFS when they can use a distro, 
so why use OpenStack without a distro? I'd say that today, unless you are 
paying a lot, there isn't really an equivalent distro that isn't almost as 
much effort as LFS when you consider day-2 ops. To compare with Red Hat again, 
we have a RHEL (Red Hat OpenStack) and a Rawhide (devstack), but no equivalent 
of CentOS. Though I think TripleO has been making progress on this front...

Anyway. This thread is I think 2 tangents away from the original topic now. If 
folks are interested in continuing this discussion, let's open a new thread.

Thanks,
Kevin


From: Dmitry Tantsur [dtant...@redhat.com]
Sent: Wednesday, July 04, 2018 4:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Tried hard to avoid this thread, but this message is so much wrong..

On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
> I don't dispute trivial, but a self hosting k8s on bare metal is not 
> incredibly hard. In fact, it is easier than you might think. k8s is a 
> platform for deploying/managing services. Guess what you need to provision 
> bare metal? Just a few microservices. A dhcp service. dhcpd in a daemonset 
> works well. some pxe infrastructure. pixiecore with a simple http backend 
> works pretty well in practice. a service to provide installation 
> instructions. nginx server handing out kickstart files for example. and a 
> place to fetch rpms from in case you don't have internet access or want to 
> ensure uniformity. nginx server with a mirror yum repo. It's even possible to 
> seed it on minikube and sluff it off to its own cluster.
>
> The main hard part about it is currently no one is shipping a reference 
> implementation of the above. That may change...
>
> It is certainly much much easier than deploying enough OpenStack to get a 
> self hosting ironic working.

Side note: no, it's not. What you describe is similarly hard to installing
standalone ironic from scratch and much harder than using bifrost for
everything. Especially when you try to do it in production. Especially with
unusual operating requirements ("no TFTP servers on my network").

Also, sorry, I cannot resist:
"Guess what you need to orchestrate containers? Just a few things. A container
runtime. Docker works well. some remote execution tooling. ansible works pretty
well in practice. It is certainly much much easier than deploying enough k8s to
get a self hosting containers orchestration working."

Such oversimplifications won't bring us anywhere. Sometimes things are hard
because they ARE hard. Where are people complaining that installing a full
GNU/Linux distribution from upstream tarballs is hard? How many operators here
use LFS as their distro? If we are okay with using a distro for GNU/Linux, why
does using a distro for OpenStack cause so much contention?

>
> Thanks,
> Kevin
>
> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: Tuesday, July 03, 2018 10:06 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
>
> On 07/02/2018 03:31 PM, Zane Bitter wrote:
>> On 28/06/18 15:09, Fox, Kevin M wrote:
>>>* made the barrier to testing/development as low as 'curl
>>> http://..minikube; minikube start' (this spurs adoption and
>>> contribution)
>>
>> That's not so different from devstack though.
>>
>>>* not having large silo's in deployment projects allowed better
>>> communication on common tooling.
>>>* Operator focused architecture, not project based architecture.
>>> This simplifies the deployment situation greatly.
>>>* try whenever possible to focus on just the commons and push 

Re: [openstack-dev] [cinder][security][api-wg] Adding http security headers

2018-07-05 Thread Doug Hellmann
Excerpts from Jim Rollenhagen's message of 2018-07-05 12:53:34 -0400:
> On Thu, Jul 5, 2018 at 12:40 PM, Nishant Kumar E <
> nishant.e.ku...@ericsson.com> wrote:
> 
> > Hi,
> >
> >
> >
> > I have registered a blueprint for adding http security headers -
> > https://blueprints.launchpad.net/cinder/+spec/http-security-headers
> >
> >
> >
> > Reason for introducing this change - I work for the AT&T cloud project,
> > Network Cloud (earlier known as AT&T Integrated Cloud). As part of working
> > there we have introduced this change within all the services as kind of a
> > downstream change but would like to see it a part of upstream community.
> > While we did not face any major threats without this change but during our
> > investigation process we found that if dealing with web services we should
> > maximize the security as much as possible and came up with a list of HTTP
> > security headers that we should include as part of the OpenStack services.
> > I would like to introduce this change as part of cinder to start off and
> > then propagate this to all the services.
> >
> >
> >
> > Some reference links which might give more insight into this:
> >
> >- https://www.owasp.org/index.php/OWASP_Secure_Headers_Project#tab=Headers
> >- https://www.keycdn.com/blog/http-security-headers/
> >- https://securityintelligence.com/an-introduction-to-http-response-headers-for-security/
> >
> > Please let me know if this looks good and whether it can be included as
> > part of Cinder followed by other services. More details on how the
> > implementation will be done is mentioned as part of the blueprint but any
> > better ideas for implementation is welcomed too !!
> >
> 
> Wouldn't this be a job for the HTTP server in front of cinder (or whatever
> service)? Especially "Strict-Transport-Security" as one shouldn't be
> enabling that without ensuring a correct TLS config.
> 
> Bonus points in that upstream wouldn't need any changes, and we won't need
> to change every project. :)
> 
> // jim

Yes, this feels very much like something the deployment tools should
do when they set up Apache or uWSGI or whatever service is in front
of each API WSGI service.

Doug



[openstack-dev] [all][api] POST /api-sig/news

2018-07-05 Thread Chris Dent


Greetings OpenStack community,

At today's meeting we discussed an issue that came up on a nova/placement 
review [9] wherein there was some indecision about whether a response code of 
400 or 404 is more appropriate when a path segment expects a UUID, the request 
doesn't supply something that is actually a UUID, and the method being used on 
the URI may be creating a resource. We agreed with the earlier discussion that 
a 400 was appropriate in this narrow case. Other cases may be different.
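
(As a minimal illustration of that agreement, not code from the review under
discussion, the check boils down to something like:)

    import uuid

    def status_for_uuid_segment(segment):
        """Return 400 for a path segment that should be a UUID but isn't."""
        try:
            uuid.UUID(segment)
        except ValueError:
            return 400  # malformed input, not a missing resource (404)
        return None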

With that warm up exercise out of the way, we moved on to discussing pending 
guidelines, freezing one of them [10] and declaring that another [11] required 
a followup to clarify the format of string codes used in error responses.

After that, we did some group learning about StoryBoard [8]. This is becoming 
something of a regular activity.

As always if you're interested in helping out, in addition to coming to the 
meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes 
over time. If you find something that's not quite right, submit a patch [6] to 
fix it.
* Have you done something for which you think guidance would have made things 
easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

None

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

* Expand error code document to expect clarity
  https://review.openstack.org/#/c/577118/

# Guidelines Currently Under Review [3]

* Add links to errors-example.json
  https://review.openstack.org/#/c/578369/

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the OpenStack 
developer mailing list[1] with the tag "[api]" in the subject. In your email, 
you should include any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131881.html
[8] https://storyboard.openstack.org/#!/project/1039
[9] https://review.openstack.org/#/c/580373/
[10] https://review.openstack.org/#/c/577118/
[11] https://review.openstack.org/#/c/578369/


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [cinder][security][api-wg] Adding http security headers

2018-07-05 Thread Jim Rollenhagen
On Thu, Jul 5, 2018 at 12:40 PM, Nishant Kumar E <
nishant.e.ku...@ericsson.com> wrote:

> Hi,
>
>
>
> I have registered a blueprint for adding http security headers -
> https://blueprints.launchpad.net/cinder/+spec/http-security-headers
>
>
>
> Reason for introducing this change - I work for the AT&T cloud project,
> Network Cloud (earlier known as AT&T Integrated Cloud). As part of working
> there we have introduced this change within all the services as kind of a
> downstream change but would like to see it a part of upstream community.
> While we did not face any major threats without this change but during our
> investigation process we found that if dealing with web services we should
> maximize the security as much as possible and came up with a list of HTTP
> security headers that we should include as part of the OpenStack services.
> I would like to introduce this change as part of cinder to start off and
> then propagate this to all the services.
>
>
>
> Some reference links which might give more insight into this:
>
>- https://www.owasp.org/index.php/OWASP_Secure_Headers_Project#tab=Headers
>- https://www.keycdn.com/blog/http-security-headers/
>- https://securityintelligence.com/an-introduction-to-http-response-headers-for-security/
>
> Please let me know if this looks good and whether it can be included as
> part of Cinder followed by other services. More details on how the
> implementation will be done is mentioned as part of the blueprint but any
> better ideas for implementation is welcomed too !!
>

Wouldn't this be a job for the HTTP server in front of cinder (or whatever
service)? Especially "Strict-Transport-Security" as one shouldn't be
enabling that without ensuring a correct TLS config.

Bonus points in that upstream wouldn't need any changes, and we won't need
to change every project. :)

// jim


[openstack-dev] [oslo] Reminder about Oslo feature freeze

2018-07-05 Thread Ben Nemec

Hi,

This is just a reminder that Oslo observes feature freeze earlier than 
other projects so those projects have time to implement any new features 
from Oslo.  Per the policy[1] we freeze one week before the non-client 
library feature freeze, which is coming in two weeks.  Therefore, we 
have about one week to land new features in Oslo.  Anything that misses 
the deadline will most likely need to wait until Stein.


Feel free to contact the Oslo team with any comments or questions.  Thanks.

-Ben

1: 
http://specs.openstack.org/openstack/oslo-specs/specs/policy/feature-freeze.html




[openstack-dev] [cinder][security][api-wg] Adding http security headers

2018-07-05 Thread Nishant Kumar E
Hi,

I have registered a blueprint for adding http security headers - 
https://blueprints.launchpad.net/cinder/+spec/http-security-headers

Reason for introducing this change: I work for the AT&T cloud project, Network 
Cloud (earlier known as AT&T Integrated Cloud). As part of working there we 
have introduced this change within all the services as a kind of downstream 
change, but we would like to see it become part of the upstream community. 
While we did not face any major threats without this change, during our 
investigation we found that, when dealing with web services, we should 
maximize security as much as possible, and we came up with a list of HTTP 
security headers that we should include as part of the OpenStack services. I 
would like to introduce this change as part of cinder to start off and then 
propagate it to all the services.

Some reference links which might give more insight into this:

  *   https://www.owasp.org/index.php/OWASP_Secure_Headers_Project#tab=Headers
  *   https://www.keycdn.com/blog/http-security-headers/
  *   https://securityintelligence.com/an-introduction-to-http-response-headers-for-security/

Please let me know if this looks good and whether it can be included as part of 
Cinder followed by other services. More details on how the implementation will 
be done are mentioned as part of the blueprint, but any better ideas for 
implementation are welcome too!
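
To sketch the kind of change meant here (illustrative only: the header list
and class name are mine, not the blueprint's), a small WSGI middleware could
append the headers to every response:

    # Minimal sketch of a WSGI middleware adding security headers.
    SECURITY_HEADERS = [
        ('X-Frame-Options', 'DENY'),
        ('X-Content-Type-Options', 'nosniff'),
        ('X-XSS-Protection', '1; mode=block'),
    ]

    class SecurityHeadersMiddleware(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            def _start_response(status, headers, exc_info=None):
                headers.extend(SECURITY_HEADERS)
                return start_response(status, headers, exc_info)
            return self.app(environ, _start_response)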

Thanks and Regards,
Nishant



[openstack-dev] [manila] No meeting today July 5

2018-07-05 Thread Tom Barron
We have a fair number of team members taking a holiday today, and no 
new agenda items were added this week, so let's skip today's community 
meeting.  The next manila community meeting will be July 12 at 1500 UTC.


https://wiki.openstack.org/wiki/Manila/Meetings

Let's keep up with the reviews on outstanding work that needs to 
complete by Milestone 3:


https://etherpad.openstack.org/p/manila-rocky-review-focus

Thanks!

-- Tom Barron (tbarron)




Re: [openstack-dev] [tc] Technical Committee Update for 3 July

2018-07-05 Thread Doug Hellmann
Excerpts from Hongbin Lu's message of 2018-07-03 22:06:41 -0400:
> >
> > Discussions about affiliation diversity continue in two directions.
> > Zane's proposal for requirements for new project teams has stalled a
> > bit. The work Thierry and Mohammed have done on the diversity tags has
> > brought a new statistics script and a proposal to drop the use of the
> > tags in favor of folding the diversity information into the more general
> > health checks we are doing. Thierry has updated the health tracker page
> >
> 
> Hi,
> 
> If appropriate, I would like to nominate myself as the liaison for the
> Zun project. I am the first PTL of the project and familiar with its
> current status, so I should be well placed to do the health
> evaluation for this project. Please let me know if it is possible for me to
> participate.
> 
> Best regards,
> Hongbin

The point of the health check process is to have the TC actively
reach out to each team to see how things are going and identify
potential issues before they turn into full blown problems. So,
while I'm sure Zane and Thierry would welcome your input, we want
them to draw their own conclusions about the state of the project.

Doug



Re: [openstack-dev] [release-announce][ironic] ironic 11.0.0 (rocky)

2018-07-05 Thread Doug Hellmann
I want to compliment the Ironic team on writing such engaging and
comprehensive release notes. Nice work!

Doug

Excerpts from no-reply's message of 2018-07-05 10:24:19 +:
> We are gleeful to announce the release of:
> 
> ironic 11.0.0: OpenStack Bare Metal Provisioning
> 
> This release is part of the rocky release series.
> 
> The source is available from:
> 
> https://git.openstack.org/cgit/openstack/ironic
> 
> Download the package from:
> 
> https://tarballs.openstack.org/ironic/
> 
> Please report issues through launchpad:
> 
> https://bugs.launchpad.net/ironic
> 
> For more details, please see below.
> 
> 11.0.0
> ^^^^^^
> 
> 
> Prelude
> ***
> 
> I R O N I C turns the dial to *11*! In preparation for the OpenStack
> Rocky development cycle release, the "ironic" Bare Metal as a Service
> team announces the release of version 11.0. While it is not quite like
> a volume knob, this release lays the foundation for features coming in
> future releases and for user experience enhancements. Some of these
> include the BIOS configuration framework, power fault recovery,
> additional error handling, refactoring, removal of classic drivers, and
> many bug fixes.
> 
> 
> New Features
> 
> 
> * Adds the healthcheck middleware from oslo, configurable via the
>   "[healthcheck]/enabled" option. This middleware adds a status check
>   at */healthcheck*. This is useful for load balancers to determine if
>   a service is up (and add or remove it from rotation), or for
>   monitoring tools to see the health of the server. This endpoint is
>   unauthenticated, as not all load balancers or monitoring tools
>   support authenticating with a health check endpoint.
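
  For illustration, enabling and probing this endpoint might look like
  the following sketch (hostname and port are assumptions):

      # ironic.conf
      [healthcheck]
      enabled = True

      $ curl http://ironic-api.example.com:6385/healthcheck
      OK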
> 
> * Adds support to abort the inspection of a node in the "inspect
>   wait" state, as long as this operation is supported by the inspect
>   interface in use. A node in the "inspect wait" state accepts the
>   "abort" provisioning verb to initiate the abort process. This
>   feature is supported by the "inspector" inspect interface and is
>   available starting with API version 1.41.
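
  (As a sketch, aborting an inspection from the CLI might look like the
  following; the node UUID is a placeholder:)

      $ export OS_BAREMETAL_API_VERSION=1.41
      $ openstack baremetal node abort <node-uuid>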
> 
> * Adds support for reading and changing the node's "bios_interface"
>   field and enables the GET endpoints to check BIOS settings, if they
>   have already been cached. This requires a compatible
>   "bios_interface" to be set. This feature is available starting with
>   API version 1.40.
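
  (A hypothetical request to read a node's cached BIOS settings, assuming
  the default API port:)

      $ curl -s -H "X-Auth-Token: $TOKEN" \
          -H "X-OpenStack-Ironic-API-Version: 1.40" \
          "http://ironic-api.example.com:6385/v1/nodes/<node-uuid>/bios"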
> 
> * The new ironic configuration setting "[deploy]/default_boot_mode"
>   allows the operator to set the default boot mode when ironic can't
>   pick a boot mode automatically based on node configuration, hardware
>   capabilities, or bare-metal machine configuration.
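
  (A sketch of the option in ironic.conf; "uefi" is just an example
  value:)

      [deploy]
      default_boot_mode = uefi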
> 
> * Adds support to the "redfish" management interface for reading and
>   setting a bare metal node's boot mode.
> 
> * Adds a new Power Distribution Unit (PDU) "snmp" driver type for
>   the BayTech MRP27.
> 
> * Adds new "auto" type of the "driver_info/snmp_driver" setting
>   which makes ironic automatically select a suitable SNMP driver type
>   based on the "SNMPv2-MIB::sysObjectID" value as reported by the PDU
>   being managed.
> 
> * Adds SNMPv3 message authentication and encryption features to the
>   ironic "snmp" hardware type. To enable these features, the following
>   parameters should be used in the node's "driver_info":
> 
>   * "snmp_user"
> 
>   * "snmp_auth_protocol"
> 
>   * "snmp_auth_key"
> 
>   * "snmp_priv_protocol"
> 
>   * "snmp_priv_key"
> 
>   Also adds support for the "context_engine_id" and "context_name"
>   parameters of the SNMPv3 message in the ironic "snmp" hardware type.
>   They can be configured in the node's "driver_info".
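
  (A hypothetical way to set these via the CLI; all values below are
  placeholders, not recommended settings:)

      $ openstack baremetal node set <node-uuid> \
          --driver-info snmp_user=monitor \
          --driver-info snmp_auth_protocol=sha \
          --driver-info snmp_auth_key=<auth-secret> \
          --driver-info snmp_priv_protocol=aes \
          --driver-info snmp_priv_key=<priv-secret>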
> 
> * Add "?detail=" boolean query to the API list endpoints to provide
>   a more RESTful alternative to the existing "/nodes/detail" and
>   similar endpoints. The default is False. Now these API requests are
>   possible:
> 
>   * "/nodes?detail=True"
> 
>   * "/ports?detail=True"
> 
>   * "/chassis?detail=True"
> 
>   * "/portgroups?detail=True"
> 
> * Adds "external" storage interface which is short for "externally
>   managed". This adds logic to allow the Bare Metal service to
>   identify when a BFV scenario is being requested based upon the
>   configuration set for "volume targets".
> 
>   The user must create the entry, and no synchronization with a Block
>   Storage service will occur. Documentation
>   (https://docs.openstack.org/ironic/latest/admin/boot-from-
>   volume.html#use-without-cinder) has been updated to reflect how to
>   use this interface.
> 
> * Adds the "[deploy]enable_ata_secure_erase" option which allows an
>   operator to disable ATA Secure Erase for all nodes being managed by
>   the conductor. This setting defaults to "True" which aligns with the
>   prior behavior of the Bare Metal service.
> 
> * Adds new parameter fields to "driver_info", which will become
>   mandatory in the Stein release:
> 
>   * "xclarity_manager_ip": IP address of the XClarity Controller.
> 
>   * "xclarity_username": Username for the XClarity 

Re: [openstack-dev] [all] log-classify project update (anomaly detection in CI/CD logs)

2018-07-05 Thread Tristan Cacqueray

On July 3, 2018 7:39 am, Tristan Cacqueray wrote:
[...] 

There is a lot to do and it will be challenging. To that effect, I would
like to propose an initial meeting with all interested parties.
Please register your irc name and timezone in this etherpad:

  https://etherpad.openstack.org/p/log-classify


So far, the mean timezone is UTC+1.75. I've added date proposals from the
16th to the 20th of July. Please add a '+' to the ones you can attend.
I'll follow up next week with an iCal file for the most popular option.

Thanks,
-Tristan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [publiccloud-wg] Meeting this afternoon for Public Cloud WG

2018-07-05 Thread Jean-Daniel Bonnetot
Sorry guys, I'm not available once again.
See you next time.

Jean-Daniel Bonnetot
ovh.com | @pilgrimstack

On 05/07/2018 09:59, "Tobias Rydberg"  wrote:

Hi folks,

Time for a new meeting for the Public Cloud WG. Agenda draft can be 
found at https://etherpad.openstack.org/p/publiccloud-wg, feel free to 
add items to that list.

See you all at IRC 1400 UTC in #openstack-publiccloud

Cheers,
Tobias

-- 
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] Meeting this afternoon for Public Cloud WG

2018-07-05 Thread Tobias Rydberg

Hi folks,

Time for a new meeting for the Public Cloud WG. Agenda draft can be 
found at https://etherpad.openstack.org/p/publiccloud-wg, feel free to 
add items to that list.


See you all at IRC 1400 UTC in #openstack-publiccloud

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Technical Committee Update for 3 July

2018-07-05 Thread Thierry Carrez

Jeremy Stanley wrote:

On 2018-07-04 16:38:41 -0500 (-0500), Sean McGinnis wrote:
[...]

Based on this lack of feedback, I would propose that we go back to
just having our predesignated office hour times; anyone interested
in catching up on what, if anything, was discussed during office
hours can go to the point in the IRC logs that they are interested
in.

[...]

Heartily seconded.


Thirded.

Office hours were meant to encourage gathering around specific times: (1) 
to increase the odds of reaching the critical mass necessary for 
discussion, and (2) to ensure presence for outsiders wanting to reach out 
to the TC.


The meeting bot enforces a "start" and an "end" to the discussion. It 
makes the hour busier. It encourages the discussion to stop rather than 
continue outside of the designated times. It discourages random 
discussions outside the hour (since they won't be logged the same way), 
and imho it discourages external questions (since they would be "on the 
record" and would interrupt busy discussions). So yes, I would prefer it 
to end.


Since I don't like to shut down an experiment without proposing 
something else, here is my suggestion. I would like to see a middle way 
between raw logs and meeting reports -- a way to take notes on a 
discussion channel the same way we document a meeting, but without a 
start or an end: a bot automatically producing a report with #info, 
#agree, and #link entries every day or week, without changing topics or 
requiring chairs or start/endmeeting commands.


Then you get the benefit of a summary (with links to raw logs) without 
constraining the discussion to specific "hours". If we are good at 
documenting, it might even reduce the need to read all logs for the 
channel -- just check the summary for interesting mentions and follow 
links if interested. The bot could even serve yesterday's report to you 
in privmsg if you asked for it. That feature could, I believe, be reused 
in other channels.
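
As a sketch of how simple such a bot could be (the log line format and
the tag set here are assumptions, not an existing implementation):

    # Sketch: distill a day's channel log into a short report by
    # collecting lines tagged #info, #agree or #link. Assumes plain
    # text logs with one message per line.
    import sys

    TAGS = ('#info', '#agree', '#link')

    def summarize(log_lines):
        report = []
        for line in log_lines:
            for tag in TAGS:
                idx = line.find(tag)
                if idx != -1:
                    # Keep the tag and everything after it.
                    report.append(line[idx:].rstrip())
                    break
        return report

    if __name__ == '__main__':
        with open(sys.argv[1]) as log:
            for note in summarize(log):
                print(note)

Run daily over the channel log, the output would be the kind of summary
described above, with the raw logs still available for full context.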


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev