Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-07-11 Thread Tony Breeds
On Mon, Jul 10, 2017 at 11:33:51AM +, Gary Kotton wrote:
> Hi,
> Will this also be moving to eol -
> https://github.com/openstack/requirements/blob/stable/mitaka/global-requirements.txt?

We need to be very careful EOLing that repo (if at all), as projects have
deep links to the upper-constraints.txt file in that repo.
If we remove that branch they'll all break, with a couple of different
behaviours (depending on pip version).  It's a corner case, but one we can
avoid :D
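
For context, gate jobs deep-link that file roughly as in the sketch below; the cgit URL pattern shown is the commonly used one but is illustrative here. The point is simply that deleting the branch turns every such fetch into a hard failure:

# Sketch of why deleting stable/mitaka breaks consumers of
# upper-constraints.txt; the URL pattern is illustrative.
import requests

URL = ("https://git.openstack.org/cgit/openstack/requirements/"
       "plain/upper-constraints.txt?h=stable/mitaka")

resp = requests.get(URL)
if resp.status_code != 200:
    # Once the branch is gone this 404s, and every job that pins its
    # dependencies against it fails (with pip-version-dependent symptoms).
    raise RuntimeError("constraints file gone: %s" % resp.status_code)
print(resp.text.splitlines()[:3])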

I think we'll tag devstack EOL once we're sure no gate jobs are using it.

Tony.





Re: [openstack-dev] [kuryr] Subnet pool does not exist error

2017-07-11 Thread Hyunsun Moon
Thanks Liping.
FYI, I use Docker version 17.06.0-ce.

$ docker --version
Docker version 17.06.0-ce, build 02c1d87

Hyunsun


> On 12 Jul 2017, at 1:51 PM, Liping Mao (limao)  wrote:
> 
> Hi Hyunsun,
>  
> From your log:
> 2017-07-11 09:21:20.221 12 ERROR kuryr_libnetwork.utils [-] Unexpected error 
> happened: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 does not 
> exist.: KuryrException: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 
> does not exist.
>  
> It looks like it failed at L1560:
> 1487 @app.route('/IpamDriver.RequestAddress', methods=['POST'])
> 1488 def ipam_request_address():
> …
> 1559 if not any(subnet) and not is_gateway:
> 1560 raise exceptions.KuryrException(
> 1561 ("Subnet with pool {0} does not exist.").format(pool_id))
>  
> This looks like the following bug, which I hit before:
> https://bugs.launchpad.net/kuryr-libnetwork/+bug/1677316
> 
> libnetwork has a bug here; I am not sure whether it has been fixed in your
> docker:
> https://github.com/docker/libnetwork/issues/1702
> 
> I did not test further at that time, because the fix had not been released
> yet. Let me double-check whether it has been fixed in the latest docker and
> update Launchpad this week.
>  
> Thanks.
>  
> Regards,
> Liping Mao
>  
> From: Hyunsun Moon
> Reply-To: OpenStack List
> Date: Wednesday, 12 July 2017, 12:25
> To: OpenStack List
> Subject: Re: [openstack-dev] [kuryr] Subnet pool does not exist error
>  
> Here it is.
>  
> $ cat /etc/kuryr/kuryr.conf
> [DEFAULT]
> bindir = /usr/libexec/kuryr
> capability_scope = global
> log_dir = /var/log/kuryr
> log_file = kuryr.log
>  
> [neutron]
> project_name = admin
> username = admin
> password = nova
> auth_url = http://10.1.1.161:35357/v2.0 
> auth_type = password
>  
> Hyunsun
>  
>  
>> On 12 Jul 2017, at 11:34 AM, Liping Mao (limao) wrote:
>>  
>> Can you help to attach your kuryr.conf configuration file?
>>  
>> Regards,
>> Liping Mao
>>  
>> From: Hyunsun Moon
>> Reply-To: OpenStack List
>> Date: Wednesday, 12 July 2017, 09:44
>> To: OpenStack List
>> Subject: [openstack-dev] [kuryr] Subnet pool does not exist error
>>  
>> Hi all, 
>> I’m getting an exception when creating a service with Kuryr network.
>>  
>> I created and installed kuryr plugin by building plugin rootfs from the 
>> source code.
>> Here’s what I did after that.
>>  
>> $ sudo docker network create --driver=kuryr/libnetwork2:latest 
>> --ipam-driver=kuryr/libnetwork2:latest --subnet=10.10.10.0/24 
>> --gateway=10.10.10.1 kuryr-net
>> $ sudo docker service create --name redis --network kuryr-net redis:3.0.6
>> Neutron does not get any request from Kuryr.
>> Here’s the error message collected from /var/log/kuryr/kuryr.log.
>> 2017-07-11 09:21:06.950 12 INFO kuryr_libnetwork.controllers [-] Creating 
>> subnetpool with the given pool CIDR
>> 2017-07-11 09:21:20.221 12 ERROR kuryr_libnetwork.utils [-] Unexpected error 
>> happened: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 does not 
>> exist.: KuryrException: Subnet with pool 
>> f96b5ff6-4065-4eed-9854-3a54122f90c9 does not exist.
>> 2017-07-11 09:36:28.115 13 INFO kuryr_libnetwork.controllers [-] Subnet 
>> already deleted.
>> 2017-07-11 09:36:28.121 13 CRITICAL kuryr [-] Unhandled error: IOError: 
>> write error
>> 2017-07-11 09:36:28.121 13 ERROR kuryr IOError: write error
>> 2017-07-11 09:36:28.121 13 ERROR kuryr
>> I tried creating a network with existing Neutron subnetpool as below, but 
>> the result was the same.
>> $ sudo docker network create --driver=kuryr/libnetwork2:latest 
>> --ipam-driver=kuryr/libnetwork2:latest --subnet=10.10.10.0/24 
>> --gateway=10.10.10.1 --ip-range=10.10.10.0/24 -o neutron.pool.name=kuryr 
>> --ipam-opt=neuton.pool.name=kuryr kuryr-net
>>  
>> Any suggestion?
>>  
>> Hyunsun

Re: [openstack-dev] [zun] sandbox and clearcontainers

2017-07-11 Thread Kumari, Madhuri
Hi Surya,

Please see my response inline.
Currently Zun has two drivers for managing containers: Docker and NovaDocker.
Sandbox was initially implemented for the NovaDocker driver, which we are going
to deprecate soon.
We are also working on making the sandbox optional for the Docker driver; see
patch [1] for the code.

[1] https://review.openstack.org/#/c/471634/

Regards,
Madhuri


From: surya.prabha...@dell.com [mailto:surya.prabha...@dell.com]
Sent: Wednesday, July 12, 2017 4:44 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [zun] sandbox and clearcontainers

Hi Folks,
I am just trying to wrap my head around Zun's sandboxing and clear
containers. From what Hongbin told us in Barcelona (see the attached pic,
which I scraped from his video):

[inline image: Zun sandbox diagram from Hongbin's talk]

The current implementation in Zun is that the sandbox is the outer container
and the real user container is nested inside the sandbox. I am trying to
figure out how this is going to play out when we have clear containers.
[Kumari, Madhuri] The sandbox container is just an infra container that
manages the IaaS resources associated with a container or a group of
containers. The real container only uses the resources attached to the infra
container; it does not run inside the infra container, so no other
virtualization layer is involved here.

I envision the following scenarios:


1)  Scenario 1: the sandbox itself is a clear container and the user nests
another clear container inside the sandbox. This is like nested
virtualization.

But I am not sure how this is going to work, since the nested containers
won't get the VT-d CPU flags.

2)  Scenario 2: the outer sandbox is just going to be a standard docker
container without VT-d, and the inside container is going to be the real
clear container with VT-d. Now this might work well, but we might lose the
isolation features for the network and storage, which lie open in the
sandbox. Won't this defeat the whole purpose of using clear containers?

[Kumari, Madhuri] I have tried running the infra container as a Docker
container and the real container as a Clear Container, and it seems to work
well. But I agree with your point that we might lose the advantage of using
clear containers.

So once the sandbox is made optional, we can run a clear container directly
without any sandbox, which solves the issue.



I am just wondering what the thought process was for this design inside Zun.
If this is trivial and I am missing something, please shed some light :).

Thanks
Surya ( spn )


Re: [openstack-dev] [kuryr] Subnet pool does not exist error

2017-07-11 Thread Liping Mao (limao)
Hi Hyunsun,

From your log:

2017-07-11 09:21:20.221 12 ERROR kuryr_libnetwork.utils [-] Unexpected error 
happened: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 does not 
exist.: KuryrException: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 
does not exist.

It looks like it failed at L1560:
1487 @app.route('/IpamDriver.RequestAddress', methods=['POST'])
1488 def ipam_request_address():
…
1559 if not any(subnet) and not is_gateway:
1560 raise exceptions.KuryrException(
1561 ("Subnet with pool {0} does not exist.").format(pool_id))

This looks like the following bug, which I hit before:
https://bugs.launchpad.net/kuryr-libnetwork/+bug/1677316

libnetwork has a bug here; I am not sure whether it has been fixed in your docker:
https://github.com/docker/libnetwork/issues/1702

I did not test further at that time, because the fix had not been released yet.
Let me double-check whether it has been fixed in the latest docker and update
Launchpad this week.

Thanks.

Regards,
Liping Mao

From: Hyunsun Moon
Reply-To: OpenStack List
Date: Wednesday, 12 July 2017, 12:25
To: OpenStack List
Subject: Re: [openstack-dev] [kuryr] Subnet pool does not exist error

Here it is.

$ cat /etc/kuryr/kuryr.conf
[DEFAULT]
bindir = /usr/libexec/kuryr
capability_scope = global
log_dir = /var/log/kuryr
log_file = kuryr.log

[neutron]
project_name = admin
username = admin
password = nova
auth_url = http://10.1.1.161:35357/v2.0
auth_type = password
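
For what it's worth, one quick way to sanity-check those [neutron] credentials is to query Neutron with them directly; a sketch using the client libraries of that era, with the values copied from the config above:

# Sanity-check the [neutron] credentials by querying Neutron directly.
from keystoneauth1.identity import v2
from keystoneauth1 import session
from neutronclient.v2_0 import client as neutron_client

auth = v2.Password(auth_url='http://10.1.1.161:35357/v2.0',
                   username='admin', password='nova',
                   tenant_name='admin')
neutron = neutron_client.Client(session=session.Session(auth=auth))
print(neutron.list_subnetpools())  # should list the pool kuryr created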

Hyunsun


On 12 Jul 2017, at 11:34 AM, Liping Mao (limao) wrote:

Can you help to attach your kuryr.conf configuration file?

Regards,
Liping Mao

From: Hyunsun Moon
Reply-To: OpenStack List
Date: Wednesday, 12 July 2017, 09:44
To: OpenStack List
Subject: [openstack-dev] [kuryr] Subnet pool does not exist error

Hi all,
I’m getting an exception when creating a service with Kuryr network.

I created and installed kuryr plugin by building plugin rootfs from the source 
code.
Here’s what I did after that.


$ sudo docker network create --driver=kuryr/libnetwork2:latest 
--ipam-driver=kuryr/libnetwork2:latest --subnet=10.10.10.0/24 
--gateway=10.10.10.1 kuryr-net

$ sudo docker service create --name redis --network kuryr-net redis:3.0.6
Neutron does not get any request from Kuryr.
Here’s the error message collected from /var/log/kuryr/kuryr.log.

2017-07-11 09:21:06.950 12 INFO kuryr_libnetwork.controllers [-] Creating 
subnetpool with the given pool CIDR

2017-07-11 09:21:20.221 12 ERROR kuryr_libnetwork.utils [-] Unexpected error 
happened: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 does not 
exist.: KuryrException: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 
does not exist.

2017-07-11 09:36:28.115 13 INFO kuryr_libnetwork.controllers [-] Subnet already 
deleted.

2017-07-11 09:36:28.121 13 CRITICAL kuryr [-] Unhandled error: IOError: write 
error

2017-07-11 09:36:28.121 13 ERROR kuryr IOError: write error

2017-07-11 09:36:28.121 13 ERROR kuryr
I tried creating a network with existing Neutron subnetpool as below, but the 
result was the same.
$ sudo docker network create --driver=kuryr/libnetwork2:latest 
--ipam-driver=kuryr/libnetwork2:latest --subnet=10.10.10.0/24 
--gateway=10.10.10.1 --ip-range=10.10.10.0/24 -o neutron.pool.name=kuryr 
--ipam-opt=neuton.pool.name=kuryr kuryr-net

Any suggestion?

Hyunsun




Re: [openstack-dev] [zun] sandbox and clearcontainers

2017-07-11 Thread Hongbin Lu
Hi Surya,

First, I would like to provide some context for folks who are not familiar with
the sandbox concept in Zun. The "sandbox" provides an isolated environment for
one or multiple containers. In the docker driver, we use it as a placeholder
for a set of Linux namespaces (i.e. network, ipc, etc.) in which the "real"
container(s) will run. For example, if an end-user runs "zun run nginx", Zun
will first create an infra container (sandbox) and leverage the set of Linux
namespaces it creates; then Zun will create the "real" (nginx) container using
the Linux namespaces of the infra container. Strictly speaking, this is not
container-inside-container, but a container inside a set of pre-existing Linux
namespaces.
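
The same namespace-sharing trick can be shown with plain docker-py (illustrative only; this is not Zun's implementation):

# Illustrative docker-py sketch of the sandbox model described above.
import docker

client = docker.from_env()

# The "sandbox" (infra) container exists only to hold the namespaces.
sandbox = client.containers.run('busybox', 'sleep 3600', detach=True)

# The "real" container joins the sandbox's namespaces instead of
# getting its own: a container inside pre-existing namespaces,
# not a container inside a container.
real = client.containers.run(
    'nginx', detach=True,
    network_mode='container:' + sandbox.id,
    ipc_mode='container:' + sandbox.id,
    pid_mode='container:' + sandbox.id)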

Second, we are working on making the sandbox optional [1]. After this feature
is implemented (targeted at Pike), operators can configure Zun into one of two
modes: "container-in-sandbox" and "standalone container". Each container driver
will have a choice to support either mode or both. For clear containers, I
assume they can be integrated with Zun via a clear container driver. Then, the
driver can implement the "standalone" mode, in which there is only a bare clear
container. An alternative is to implement the "container-in-sandbox" mode. In
this scenario, the sandbox itself is a clear container, as you mentioned.
Inside the clear container, I guess there is a kernel that can be used to boot
the user's container image(s) (like how hypercontainer is run as a pod [2]).
However, I am not exactly sure if this scenario is possible.

Hope this answers your question.

[1] https://blueprints.launchpad.net/zun/+spec/make-sandbox-optional
[2] 
http://blog.kubernetes.io/2016/05/hypernetes-security-and-multi-tenancy-in-kubernetes.html

Best regards,
Hongbin

From: surya.prabha...@dell.com [mailto:surya.prabha...@dell.com]
Sent: July-11-17 7:14 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [zun] sandbox and clearcontainers

Hi Folks,
I am just trying to wrap my head around Zun's sandboxing and clear
containers. From what Hongbin told us in Barcelona (see the attached pic,
which I scraped from his video):

[inline image: Zun sandbox diagram from Hongbin's talk]

The current implementation in Zun is that the sandbox is the outer container
and the real user container is nested inside the sandbox. I am trying to
figure out how this is going to play out when we have clear containers.

I envision the following scenarios:


1)  Scenario 1: the sandbox itself is a clear container and the user nests
another clear container inside the sandbox. This is like nested
virtualization.

But I am not sure how this is going to work, since the nested containers
won't get the VT-d CPU flags.

2)  Scenario 2: the outer sandbox is just going to be a standard docker
container without VT-d, and the inside container is going to be the real
clear container with VT-d. Now this might work well, but we might lose the
isolation features for the network and storage, which lie open in the
sandbox. Won't this defeat the whole purpose of using clear containers?

I am just wondering what the thought process was for this design inside Zun.
If this is trivial and I am missing something, please shed some light :).

Thanks
Surya ( spn )


Re: [openstack-dev] [mistral] CloudFlow demo available online

2017-07-11 Thread Renat Akhmerov
Thanks Guy! Great work )

Renat Akhmerov
@Nokia

On 11 Jul 2017, 16:53 +0700, Dougal Matthews , wrote:
> Thanks for this - it is really useful to be able to click around and play 
> with the UI - gives me a much better feel for it.
>
>
> > On 11 July 2017 at 08:26, Shaanan, Guy (Nokia - IL/Kfar Sava) wrote:
> > > CloudFlow (the Mistral Workflow Visualization Tool) demo is now available 
> > > online[1] using a real Mistral Pike instance.
> > >
> > > You can evaluate the tool by selecting one of the pre-existing 
> > > executions, and:
> > >
> > > • See its YAML definition,
> > > • Watch the execution information,
> > > • Select a task and watch its properties, results and published values,
> > > • Select a task and see the execution path of that task,
> > > • Delve into a sub-workflow execution,
> > > • and many more…
> > >
> > >
> > > You can download the latest version from our github repo[2] and also 
> > > report bugs and ask for features.
> > >
> > > [1] http://rawgit.com/nokia/CloudFlow/master/docs/index.html
> > > [2] https://github.com/nokia/CloudFlow
> > >
> > > -
> > > Guy Shaanan
> > > CI & Internal Tools
> > > Application & Analytics, Nokia
> > > 16 Atir Yeda St. Kfar-Saba 44643, ISRAEL
> > > T: +972 9 793 3013
> > > M: +972 52 536 2986
> > > guy.shaa...@nokia.com
> > >
> > >


Re: [openstack-dev] [kuryr] Subnet pool does not exist error

2017-07-11 Thread Hyunsun Moon
Here it is.

$ cat /etc/kuryr/kuryr.conf
[DEFAULT]
bindir = /usr/libexec/kuryr
capability_scope = global
log_dir = /var/log/kuryr
log_file = kuryr.log

[neutron]
project_name = admin
username = admin
password = nova
auth_url = http://10.1.1.161:35357/v2.0
auth_type = password

Hyunsun


> On 12 Jul 2017, at 11:34 AM, Liping Mao (limao)  wrote:
> 
> Can you help to attach your kuryr.conf configuration file?
>  
> Regards,
> Liping Mao
>  
> From: Hyunsun Moon
> Reply-To: OpenStack List
> Date: Wednesday, 12 July 2017, 09:44
> To: OpenStack List
> Subject: [openstack-dev] [kuryr] Subnet pool does not exist error
>  
> Hi all, 
> I’m getting an exception when creating a service with Kuryr network.
>  
> I created and installed kuryr plugin by building plugin rootfs from the 
> source code.
> Here’s what I did after that.
>  
> $ sudo docker network create --driver=kuryr/libnetwork2:latest 
> --ipam-driver=kuryr/libnetwork2:latest --subnet=10.10.10.0/24 
> --gateway=10.10.10.1 kuryr-net
> $ sudo docker service create --name redis --network kuryr-net redis:3.0.6
> Neutron does not get any request from Kuryr.
> Here’s the error message collected from /var/log/kuryr/kuryr.log.
> 2017-07-11 09:21:06.950 12 INFO kuryr_libnetwork.controllers [-] Creating 
> subnetpool with the given pool CIDR
> 2017-07-11 09:21:20.221 12 ERROR kuryr_libnetwork.utils [-] Unexpected error 
> happened: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 does not 
> exist.: KuryrException: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 
> does not exist.
> 2017-07-11 09:36:28.115 13 INFO kuryr_libnetwork.controllers [-] Subnet 
> already deleted.
> 2017-07-11 09:36:28.121 13 CRITICAL kuryr [-] Unhandled error: IOError: write 
> error
> 2017-07-11 09:36:28.121 13 ERROR kuryr IOError: write error
> 2017-07-11 09:36:28.121 13 ERROR kuryr
> I tried creating a network with existing Neutron subnetpool as below, but the 
> result was the same.
> $ sudo docker network create --driver=kuryr/libnetwork2:latest 
> --ipam-driver=kuryr/libnetwork2:latest --subnet=10.10.10.0/24 
> --gateway=10.10.10.1 --ip-range=10.10.10.0/24 -o neutron.pool.name=kuryr 
> --ipam-opt=neuton.pool.name=kuryr kuryr-net
>  
> Any suggestion?
>  
> Hyunsun


Re: [openstack-dev] [kuryr] Subnet pool does not exist error

2017-07-11 Thread Liping Mao (limao)
Can you help to attach your kuryr.conf configuration file?

Regards,
Liping Mao

From: Hyunsun Moon
Reply-To: OpenStack List
Date: Wednesday, 12 July 2017, 09:44
To: OpenStack List
Subject: [openstack-dev] [kuryr] Subnet pool does not exist error

Hi all,
I’m getting an exception when creating a service with Kuryr network.

I created and installed kuryr plugin by building plugin rootfs from the source 
code.
Here’s what I did after that.


$ sudo docker network create --driver=kuryr/libnetwork2:latest 
--ipam-driver=kuryr/libnetwork2:latest --subnet=10.10.10.0/24 
--gateway=10.10.10.1 kuryr-net

$ sudo docker service create --name redis --network kuryr-net redis:3.0.6
Neutron does not get any request from Kuryr.
Here’s the error message collected from /var/log/kuryr/kuryr.log.

2017-07-11 09:21:06.950 12 INFO kuryr_libnetwork.controllers [-] Creating 
subnetpool with the given pool CIDR

2017-07-11 09:21:20.221 12 ERROR kuryr_libnetwork.utils [-] Unexpected error 
happened: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 does not 
exist.: KuryrException: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 
does not exist.

2017-07-11 09:36:28.115 13 INFO kuryr_libnetwork.controllers [-] Subnet already 
deleted.

2017-07-11 09:36:28.121 13 CRITICAL kuryr [-] Unhandled error: IOError: write 
error

2017-07-11 09:36:28.121 13 ERROR kuryr IOError: write error

2017-07-11 09:36:28.121 13 ERROR kuryr
I tried creating a network with existing Neutron subnetpool as below, but the 
result was the same.
$ sudo docker network create --driver=kuryr/libnetwork2:latest 
--ipam-driver=kuryr/libnetwork2:latest --subnet=10.10.10.0/24 
--gateway=10.10.10.1 --ip-range=10.10.10.0/24 -o neutron.pool.name=kuryr 
--ipam-opt=neuton.pool.name=kuryr kuryr-net

Any suggestion?

Hyunsun


Re: [openstack-dev] [keystone] office hours report 2017-7-7

2017-07-11 Thread Mathieu Gagné
Hi,

So this email is relevant to my interests as an operator. =)

On Tue, Jul 11, 2017 at 9:35 PM, Lance Bragstad  wrote:

> *The future of the templated catalog backend*
>
> Some issues were uncovered, or just resurfaced, with the templated catalog
> backend. The net of the discussion boiled down to - do we fix it or remove
> it? The answer actually ended up being both. It was determined that instead
> of trying to maintain and fix the existing templated backend, we should
> deprecate it for removal [0]. Since it does provide some value, it was
> suggested that we can start implementing a new backend based on YAML to
> fill the purpose instead. The advantage here is that the approach is
> directed towards a specific format (YAML). This should hopefully make
> things easier for both developers and users.
>
> [0] https://review.openstack.org/#/c/482714/
>

We have been exclusively using the templated catalog backend for at least 5
years without any major issues. And it looks like we are now among the < 3%
using templated according to the April 2017 user survey.
¯\_(ツ)_/¯

We chose the templated catalog backend for its simplicity (especially with
our CMS) and because it makes no sense (to me) to use and rely on an
SQL server to serve what is essentially static content.


Regarding the v3 catalog support, we do have an in-house fix we intended to
upstream very soon (and just did, right now) [1].


So if the templated catalog backend gets deprecated, my wish would be to
have access to an alternate file-based, production-grade implementation,
ready to be used before I get spammed with deprecation warnings in the
keystone logs.

Thanks

[1] https://review.openstack.org/#/c/482766/

--
Mathieu


Re: [openstack-dev] [nova] Should PUT /os-services be idempotent?

2017-07-11 Thread Alex Xu
2017-07-12 9:18 GMT+08:00 Matt Riedemann :

> I'm looking for some broader input on something being discussed in this
> change:
>
> https://review.openstack.org/#/c/464280/21/nova/api/openstack/compute/services.py
>
> This is collapsing the following APIs into a single API:
>
> Old:
>
> * PUT /os-services/enable
> * PUT /os-services/disable
> * PUT /os-services/disable-log-reason
> * PUT /os-services/force-down
>
> New:
>
> * PUT /os-services
>
> With the old APIs, if you tried to enable an already enabled service, it
> was not an error. The same if you tried to disable an already disabled
> service. It doesn't change anything, but it's not an error.
>
> The question is coming up in the new API whether trying to enable an enabled
> service should be a 400, or trying to disable a disabled service. The way I
> wrote the new API, those are not 400 conditions. They don't do anything,
> like before, but they aren't errors.
>

Sorry, I didn't describe clearly in the comment.

Some of those comments are about saving a DB call with more condition
checks. It means if the user enables an already enabled service, we don't
need a DB call; we can just return 200 to the user directly.

One of those comments is about the case where the API user specifies both
'status=enabled' and 'disabled_reason' in the request body: we currently
just ignore the 'disabled_reason' and don't save it into the DB either.
That doesn't sound right. We should return 400 to the API user; you can't
specify both 'status=enabled' and 'disabled_reason'.
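
A minimal sketch of that validation rule (illustrative only, not Nova's actual handler code):

# Illustrative validation for the proposed PUT /os-services body;
# not Nova's real code, just the rule described above.
class BadRequest(Exception):
    pass

def validate_service_update(body, current_status):
    status = body.get('status')
    reason = body.get('disabled_reason')
    if status == 'enabled' and reason:
        # Contradictory request: a reason only makes sense when disabling.
        raise BadRequest(
            "disabled_reason cannot be set when status is 'enabled'")
    if status == current_status:
        return False  # no-op: skip the DB call, still return 200
    return True       # persist the change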


>
> Looking at [1] it seems this should not be an error condition if you're
> trying to update the state of a resource and it's already at that state.
>
> I don't have a PhD in REST though so would like broader discussion on this.
>
> [1] http://www.restapitutorial.com/lessons/idempotency.html
>
> --
>
> Thanks,
>
> Matt
>


Re: [openstack-dev] [heat] Online video meet up this week (topic:review)

2017-07-11 Thread Rico Lin
2017-07-12 2:10 GMT+08:00 Jeremy Stanley :
>
> On 2017-07-12 01:47:02 +0800 (+0800), Rico Lin wrote:
> [...]
> > we will make our meeting this week as an online video meeting
> [...]
>
> Friendly reminder: "If the project has meetings [...] they should be
> public and in IRC. They should all be logged and published"
>
https://governance.openstack.org/tc/reference/new-projects-requirements.html
I would rather call this video meeting a `meet up`, as the title says,
since we will not discuss anything else; we will just review and share
thoughts about each patch.
(I will definitely share the information on IRC and the wiki.)
>
> Also, while Zoom's service and client software may be "free" in the
> gratis sense, they are not free in the libre sense. Moving your
> meetings to a proprietary system (whether it charges money for you
> to be able to use it or not) isn't in the spirit of an open
> community and necessarily excludes participation by people who value
> software freedom.
That's a great point that we all agree on (or otherwise why would we be
here? :)), but during the team meeting we could not come up with a video
channel that happens to be purely open source (and stable to use). And of
course, if people can help to provide such an environment for us to try,
then I'm happy to give it a test :)

> --
> Jeremy Stanley
>


Re: [openstack-dev] [nova] Should PUT /os-services be idempotent?

2017-07-11 Thread Ed Leafe
On Jul 11, 2017, at 8:18 PM, Matt Riedemann  wrote:

> With the old APIs, if you tried to enable an already enabled service, it was 
> not an error. The same if you tried to disable an already disabled service. 
> It doesn't change anything, but it's not an error.
> 
> The question is coming up in the new API whether trying to enable an enabled 
> service should be a 400, or trying to disable a disabled service. The way I 
> wrote the new API, those are not 400 conditions. They don't do anything, like 
> before, but they aren't errors.

These should not be errors. You are calling the API to set a particular 
condition. The result of that call is that the service is in that condition. 
That should be a 2xx, most likely a 204. So yeah, it should be idempotent.

-- Ed Leafe









Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-11 Thread Steve Baker
On Wed, Jul 12, 2017 at 11:47 AM, James Slagle 
wrote:

> On Tue, Jul 11, 2017 at 6:53 PM, Steve Baker  wrote:
> >
> >
> > On Tue, Jul 11, 2017 at 6:51 AM, James Slagle 
> > wrote:
> >>
> >> On Mon, Jul 10, 2017 at 11:37 AM, Lars Kellogg-Stedman wrote:
> >> > On Fri, Jul 7, 2017 at 1:50 PM, James Slagle 
> >> > wrote:
> >> >>
> >> >> There are also some ideas forming around pulling the Ansible playbooks
> >> >> and vars out of Heat so that they can be rerun (or run initially)
> >> >> independently from the Heat SoftwareDeployment delivery mechanism:
> >> >
> >> >
> >> > I think the closer we can come to "the operator runs ansible-playbook
> >> > to configure the overcloud" the better, but not because I think
> >> > Ansible is inherently a great tool: rather, I think the many layers of
> >> > indirection in our existing model make error reporting and diagnosis
> >> > much more complicated than it needs to be.  Combined with Puppet's
> >> > "fail as late as possible" model, this means that (a) operators waste
> >> > time waiting for a deployment that is ultimately going to fail but
> >> > hasn't yet, and (b) when it does fail, they need relatively intimate
> >> > knowledge of our deployment tools to backtrack through logs and find
> >> > the root cause of the failure.
> >> >
> >> > If we can offer a deployment mode that reduces the number of layers
> >> > between the operator and the actions being performed on the hosts I
> >> > think we would win on both fronts: faster failures and reporting
> >> > errors as close as possible to the actual problem will result in less
> >> > frustration across the board.
> >> >
> >> > I do like Steve's suggestion of a split model where Heat is
> >> > responsible for instantiating OpenStack resources while Ansible is
> >> > used to perform host configuration tasks.  Despite all the work done
> >> > on Ansible's OpenStack modules, they feel inflexible and frustrating
> >> > to work with when compared to Heat's state-aware, dependency ordered
> >> > deployments.  A solution that allows Heat to output configuration
> >> > that can subsequently be consumed by Ansible -- either running
> >> > manually or perhaps via Mistral for API-driven-deployments -- seems
> >> > like an excellent goal.  Using Heat as a "front-end" to the process
> >> > means that we get to keep the parameter validation and documentation
> >> > that is missing in Ansible, while still following the Unix philosophy
> >> > of giving you enough rope to hang yourself if you really want it.
> >>
> >> This is excellent input, thanks for providing it.
> >>
> >> I think it lends itself towards suggesting that we may like to pursue
> >> (again) adding native Ironic resources to Heat. If those were written
> >> in a way that also addressed some of the feedback about TripleO and
> >> the baremetal deployment side, then we could continue to get the
> >> advantages from Heat that you mention.
> >>
> >> My personal opinion to date is that Ansible's os_ironic* modules are
> >> superior in some ways to the Heat->Nova->Ironic model. However, just a
> >> Heat->Ironic model may work in a way that has the advantages of both.
> >
> >
> > I too would dearly like to get nova out of the picture. Our placement
> needs
> > mean the scheduler is something we need to work around, and it discards
> > basically all context for the operator when ironic can't deploy for some
> > reason.
> >
> > Whether we use a mistral workflow[1], a heat resource, or ansible
> > os_ironic, there will still need to be some python logic to build the
> > config drive ISO that injects the ssh keys and os-collect-config
> > bootstrap.
> >
> > Unfortunately ironic iPXE boot from iSCSI[2] doesn't support config-drive
> > (still?) so the only option to inject ssh keys is the nova ec2-metadata
> > service (or equivalent). I suspect if we can't make every ironic
> > deployment method support config-drive then we're stuck with nova.
> >
> > I don't have a strong preference for a heat resource vs mistral vs
> > ansible os_ironic, but given there is some python logic required anyway,
> > I would lean towards a heat resource. If the resource is general enough
> > we could propose it to heat upstream, otherwise we could carry it in
> > tripleo-common.
> >
> > Alternatively, we can implement a config-drive builder in tripleo-common
> > and invoke that from mistral or ansible.
>
> Ironic's cli node-set-provision-state command has a --config-drive
> option where you just point it a directory and it will automatically
> bundle that dir into the config drive ISO format.
>
> Ansible's os_ironic_node[1] also supports that via the config_drive
> parameter. Combining that with a couple of template tasks to create
> meta_data.json and user_data 
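
A rough illustration of building such a config-drive input directory (the openstack/latest layout is the conventional one; the keys and contents below are made up for the sketch):

# Sketch: build a directory suitable for
#   ironic node-set-provision-state --config-drive <dir>
# Layout and contents are illustrative; adapt to your deployment.
import json
import os

d = 'configdrive/openstack/latest'
os.makedirs(d)
with open(os.path.join(d, 'meta_data.json'), 'w') as f:
    json.dump({'uuid': 'replace-with-node-uuid',
               'public_keys': {'deploy-key': 'ssh-rsa AAAA...'}}, f)
with open(os.path.join(d, 'user_data'), 'w') as f:
    f.write('#cloud-config\n# bootstrap os-collect-config here\n')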

Re: [openstack-dev] [tripleo][ci] decreased coverage for telemetry

2017-07-11 Thread Wesley Hayutin
On Tue, Jul 11, 2017 at 9:04 PM, Emilien Macchi  wrote:

> On Tue, Jul 11, 2017 at 12:41 PM, Pradeep Kilambi  wrote:
> > On Tue, Jul 11, 2017 at 3:17 PM, Wesley Hayutin 
> wrote:
> >> Greetings,
> >>
> >> I was looking through the mailing list and I did not see any emails
> >> explicitly calling out the decreased coverage for telemetry in tripleo
> due
> >> to [1].  A series of changes went into the CI system to disable
> telemetry
> >> [2].
> >>
> >> There is work being done to restore more coverage for telemetry by
> limiting
> >> the resources it consumes [3].  We are also working on additional
> scenarios
> >> in t-h-t/ci/environments/ to better cover ceilometer.
> >>
> >> If the CI environment you are working in has the resources to cover
> >> ceilometer that is great, however if you find issues like [1] we highly
> >> suggest you follow the same pattern until coverage is restored upstream.
> >>
> >> Thank you!
> >>
> >> [1] https://bugs.launchpad.net/tripleo/+bug/1693174
> >> [2] https://review.openstack.org/#/q/topic:bug/1680195
> >> [3]
> >> https://review.openstack.org/#/c/475838/
> >> https://review.openstack.org/#/c/474969/
> >> https://review.openstack.org/#/c/47/
> >>
> >>
> >
> > Thanks for starting this thread Wes. I concur with this. We got bitten
> > recently by many issues that we could have caught in ci had telemetry
> > been enabled. I spoke to trown and Emilien about this a few times
> > already. I do understand the resource footprint it causes.  But with
> > recent improvements and changes upstream, things should be back to
> > being more manageable. We do have telemetry tested in scenario001 job,
> > but that doesn't cover all scenarios. So there is a gap in coverage.
>
> What do you mean by gap in coverage?
> We have scenarios on purpose, so we can horizontally scale the
> coverage across multiple jobs and run the jobs only when we need (e.g.
> touching telemetry files for scenario001).
>
> Please elaborate on what isn't covered by scenario001, because we
> already cover Gnocchi, Panko, Aodh and Ceilometer (with RBD backend
> and soon with Swift backend in scenario002).
>
>
Emilien,
Gap is the wrong word to use in this case.
Previously we had several jobs running with telemetry turned on, including
ovb jobs in tripleo and other jobs outside of the upstream CI system.
The more jobs running, the more coverage.
I think that is what Pradeep was referring to, but maybe I am
misunderstanding this as well.



> >  I hope we can either re-enable these services by default in CI and see
> > how things work, or at least add a separate gate job to be able to test
> > the HA scenario properly with telemetry enabled.
> >
> > --
> > Cheers,
> > ~ Prad
> >
>
>
>
> --
> Emilien Macchi
>


[openstack-dev] [kuryr] Subnet pool does not exist error

2017-07-11 Thread Hyunsun Moon
Hi all,
I’m getting an exception when creating a service with Kuryr network.

I created and installed kuryr plugin by building plugin rootfs from the source 
code.
Here’s what I did after that.

$ sudo docker network create --driver=kuryr/libnetwork2:latest 
--ipam-driver=kuryr/libnetwork2:latest --subnet=10.10.10.0/24 
--gateway=10.10.10.1 kuryr-net
$ sudo docker service create --name redis --network kuryr-net redis:3.0.6
Neutron does not get any request from Kuryr.
Here’s the error message collected from /var/log/kuryr/kuryr.log.
2017-07-11 09:21:06.950 12 INFO kuryr_libnetwork.controllers [-] Creating 
subnetpool with the given pool CIDR
2017-07-11 09:21:20.221 12 ERROR kuryr_libnetwork.utils [-] Unexpected error 
happened: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 does not 
exist.: KuryrException: Subnet with pool f96b5ff6-4065-4eed-9854-3a54122f90c9 
does not exist.
2017-07-11 09:36:28.115 13 INFO kuryr_libnetwork.controllers [-] Subnet already 
deleted.
2017-07-11 09:36:28.121 13 CRITICAL kuryr [-] Unhandled error: IOError: write 
error
2017-07-11 09:36:28.121 13 ERROR kuryr IOError: write error
2017-07-11 09:36:28.121 13 ERROR kuryr
I tried creating a network with existing Neutron subnetpool as below, but the 
result was the same.
$ sudo docker network create --driver=kuryr/libnetwork2:latest 
--ipam-driver=kuryr/libnetwork2:latest --subnet=10.10.10.0/24 
--gateway=10.10.10.1 --ip-range=10.10.10.0/24 -o neutron.pool.name=kuryr 
--ipam-opt=neuton.pool.name=kuryr kuryr-net

Any suggestion?

Hyunsun


[openstack-dev] [keystone] office hours report 2017-7-7

2017-07-11 Thread Lance Bragstad
Hey all,

This is a summary of what was worked on today during office hours. Full
logs of the meeting can be found below:

http://eavesdrop.openstack.org/meetings/office_hours/2017/office_hours.2017-07-11-19.00.log.html

*The future of the templated catalog backend*

Some issues were uncovered, or just resurfaced, with the templated
catalog backend. The net of the discussion boiled down to - do we fix it
or remove it? The answer actually ended up being both. It was determined
that instead of trying to maintain and fix the existing templated
backend, we should deprecate it for removal [0]. Since it does provide
some value, it was suggested that we can start implementing a new
backend based on YAML to fill the purpose instead. The advantage here is
that the approach is directed towards a specific format (YAML). This
should hopefully make things easier for both developers and users.

[0] https://review.openstack.org/#/c/482714/
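
To make the YAML idea concrete, here is a sketch of what a file-driven static catalog might look like (purely illustrative; the actual format of the new backend is still to be designed):

# Illustrative only: one possible shape for a YAML-backed catalog.
import yaml

CATALOG_YAML = """
RegionOne:
  identity:
    public: http://keystone.example.com:5000/v3
  image:
    public: http://glance.example.com:9292
"""

def endpoints(region, service_type):
    # A file-based backend would load this once and serve it statically,
    # with no SQL server involved.
    return yaml.safe_load(CATALOG_YAML)[region][service_type]

print(endpoints('RegionOne', 'image'))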

*Policy fixes*

All the policy-in-code work has exposed several issues with policy
defaults in keystone. We spent time as a group going through several of
the bugs [0] [1] [2] [3], the corresponding fixes, and impact. One of
which will be backported specifically for the importance of
communicating a release note to stable users [0].

[0] https://bugs.launchpad.net/keystone/+bug/1703369
[1] https://bugs.launchpad.net/keystone/+bug/1703392
[2] https://bugs.launchpad.net/keystone/+bug/1703467
[3] https://bugs.launchpad.net/keystone/+bug/1133435

*Additional bugs worked*

Transient bug with security compliance or PCI-DSS:
https://bugs.launchpad.net/keystone/+bug/1702211
Request header issues: https://bugs.launchpad.net/keystone/+bug/1689468


I hope to find ways to automate most of what is communicated in this
summary. Until then I'm happy to hear feedback if you find the report
lacking in a specific area.


Thanks,

Lance





[openstack-dev] [nova] Should PUT /os-services be idempotent?

2017-07-11 Thread Matt Riedemann
I'm looking for some broader input on something being discussed in this 
change:


https://review.openstack.org/#/c/464280/21/nova/api/openstack/compute/services.py

This is collapsing the following APIs into a single API:

Old:

* PUT /os-services/enable
* PUT /os-services/disable
* PUT /os-services/disable-log-reason
* PUT /os-services/force-down

New:

* PUT /os-services

With the old APIs, if you tried to enable an already enabled service, 
it was not an error. The same if you tried to disable an already 
disabled service. It doesn't change anything, but it's not an error.


The question is coming up in the new API whether trying to enable an 
enabled service should be a 400, or trying to disable a disabled 
service. The way I wrote the new API, those are not 400 conditions. 
They don't do anything, like before, but they aren't errors.


Looking at [1] it seems this should not be an error condition if you're 
trying to update the state of a resource and it's already at that state.


I don't have a PhD in REST though so would like broader discussion on this.

[1] http://www.restapitutorial.com/lessons/idempotency.html
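
Concretely, idempotency here just means repeating the same request yields the same outcome; a client-side illustration (hypothetical endpoint and payload):

# Hypothetical client-side view of the proposed idempotent PUT.
import requests

body = {'host': 'compute1', 'binary': 'nova-compute', 'status': 'disabled'}
for attempt in range(2):
    r = requests.put('http://nova.example.com/v2.1/os-services',
                     json=body, headers={'X-Auth-Token': 'TOKEN'})
    # Both calls should return the same 2xx; the second is a no-op.
    print(attempt, r.status_code)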

--

Thanks,

Matt



[openstack-dev] [all] [tc] Technical Committee office hours is starting now on #openstack-tc!

2017-07-11 Thread Emilien Macchi
Join #openstack-tc if you would like to discuss with TC members, we're
online and happy to answer any question or discuss about any topic.

Thanks,
-- 
Emilien Macchi



Re: [openstack-dev] [tripleo][ci] decreased coverage for telemetry

2017-07-11 Thread Emilien Macchi
On Tue, Jul 11, 2017 at 12:41 PM, Pradeep Kilambi  wrote:
> On Tue, Jul 11, 2017 at 3:17 PM, Wesley Hayutin  wrote:
>> Greetings,
>>
>> I was looking through the mailing list and I did not see any emails
>> explicitly calling out the decreased coverage for telemetry in tripleo due
>> to [1].  A series of changes went into the CI system to disable telemetry
>> [2].
>>
>> There is work being done to restore more coverage for telemetry by limiting
>> the resources it consumes [3].  We are also working on additional scenarios
>> in t-h-t/ci/environments/ to better cover ceilometer.
>>
>> If the CI environment you are working in has the resources to cover
>> ceilometer that is great, however if you find issues like [1] we highly
>> suggest you follow the same pattern until coverage is restored upstream.
>>
>> Thank you!
>>
>> [1] https://bugs.launchpad.net/tripleo/+bug/1693174
>> [2] https://review.openstack.org/#/q/topic:bug/1680195
>> [3]
>> https://review.openstack.org/#/c/475838/
>> https://review.openstack.org/#/c/474969/
>> https://review.openstack.org/#/c/47/
>>
>>
>
> Thanks for starting this thread Wes. I concur with this. We got bitten
> recently by many issues that we could have caught in ci had telemetry
> been enabled. I spoke to trown and Emilien about this a few times
> already. I do understand the resource footprint it causes.  But with
> recent improvements and changes upstream, things should be back to
> being more manageable. We do have telemetry tested in scenario001 job,
> but that doesn't cover all scenarios. So there is a gap in coverage.

What do you mean by gap in coverage?
We have scenarios on purpose, so we can horizontally scale the
coverage across multiple jobs and run the jobs only when we need (e.g.
touching telemetry files for scenario001).

Please elaborate on what isn't covered by scenario001, because we
already cover Gnocchi, Panko, Aodh and Ceilometer (with RBD backend
and soon with Swift backend in scenario002).

>  I hope we can either re-enable these services by default in CI and see
> how things work, or at least add a separate gate job to be able to test
> the HA scenario properly with telemetry enabled.
>
> --
> Cheers,
> ~ Prad
>



-- 
Emilien Macchi



Re: [openstack-dev] [tripleo][ci] decreased coverage for telemetry

2017-07-11 Thread Wesley Hayutin
On Tue, Jul 11, 2017 at 3:41 PM, Pradeep Kilambi  wrote:

> On Tue, Jul 11, 2017 at 3:17 PM, Wesley Hayutin 
> wrote:
> > Greetings,
> >
> > I was looking through the mailing list and I did not see any emails
> > explicitly calling out the decreased coverage for telemetry in tripleo
> due
> > to [1].  A series of changes went into the CI system to disable telemetry
> > [2].
> >
> > There is work being done to restore more coverage for telemetry by
> limiting
> > the resources it consumes [3].  We are also working on additional
> scenarios
> > in t-h-t/ci/environments/ to better cover ceilometer.
> >
> > If the CI environment you are working in has the resources to cover
> > ceilometer that is great, however if you find issues like [1] we highly
> > suggest you follow the same pattern until coverage is restored upstream.
> >
> > Thank you!
> >
> > [1] https://bugs.launchpad.net/tripleo/+bug/1693174
> > [2] https://review.openstack.org/#/q/topic:bug/1680195
> > [3]
> > https://review.openstack.org/#/c/475838/
> > https://review.openstack.org/#/c/474969/
> > https://review.openstack.org/#/c/47/
> >
> >
>
> Thanks for starting this thread Wes. I concur with this. We got bitten
> recently by many issues that we could have caught in ci had telemetry
> been enabled. I spoke to trown and Emilien about this a few times
> already. I do understand the resource footprint it causes.  But with
> recent improvements and changes upstream, things should be back to
> being more manageable. We do have telemetry tested in scenario001 job,
> but that doesn't cover all scenarios. So there is a gap in coverage.
>
>  I hope we can either re-enable these services by default in CI and see
> how things work, or at least add a separate gate job to be able to test
> the HA scenario properly with telemetry enabled.
>
> --
> Cheers,
> ~ Prad
>

While Prad and I were having the conversation, I raised the point that the
tripleo community may be more willing to turn on more coverage for
ceilometer if the
gate-tripleo-ci-centos-7-scenario001-multinode-oooq-puppet-nv job that runs
on ceilometer changes were moved from non-voting to voting.

Note, we are trying to get more and more projects to run tripleo based jobs
in their check gates generally.

Thanks


Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-11 Thread Monty Taylor

On 07/11/2017 06:47 AM, Flavio Percoco wrote:

On 11/07/17 14:20 +0300, Mikhail Fedosin wrote:
On Tue, Jul 11, 2017 at 1:43 AM, Monty Taylor wrote:



On 07/10/2017 04:31 PM, Mikhail Fedosin wrote:

Third, all these changes can be hidden in Glare client. So if we try a
little, we can achieve 100% compatibility there, and other projects can use
Glare client instead of Glance's without even noticing the differences.



I think we should definitely not do this... I think instead, if we decide
to go down this road, we want to look at adding an endpoint to glare that
speaks glance v2 API so that users can have a transition period while
libraries and tools get updated to understand the artifacts API.



This is optional and depends on the project developers. For my part, I can
only offer the most compatible client, so that the Glance module can be
simply copied into the new Glare module.


Unfortunately, adding this sort of logic to the client is almost never the
right choice. To be completely honest, I'm not even convinced having a
Glance-like API in Glare is the right thing to do. As soon as that API hits
the codebase, you'll have to maintain it.

Anything that delays the transition to the new thing is providing a fake
bridge to the users. It's a bridge that will be blown-up eventually.

To make a hypothetical transition from Glance to Glare work smoothly, we
should first figure out how to migrate the database (assuming this has not
been done yet), how to migrate the images, etc. Only when these things have
been figured out, I'd start worrying about what compatibility layer we want
to provide. The answer could also be: "Hey, we're sorry but, the best thing
you can do is to migrate your code base as soon as possible".


I think this is a deal breaker. The problem is - if glare doesn't 
provide a v2 compat layer, then a deployer is going to have to run 
glance AND glare at the same time and we'll have to make sure both 
glance and glare can write to the same backend.


The reason is that with our major version bumps both versions co-exist 
for a period of time which allows consumers to gracefully start 
consuming the nicer and newer api while not being immediately broken 
when the old api isn't there.


What we'd be looking at is:

* a glare service that runs two endpoints - an /image endpoint and an 
/artifact endpoint - and that registers the /image endpoint with the 
catalog as the 'image' service_type and the /artifact endpoint with the 
catalog as the 'artifact' service_type followed by a deprecation period 
of the image endpoint from the bazillion things that use it and a 
migration to the artifact service.


OR

First - immediately bump the glare api version to 3.0. This will affect 
some glare users, but given the relative numbers of glance v. glare 
users, it may be the right choice.


Run a single set of versioned endpoints - no /v1, /v2 has /image at the 
root and /v3 has /artifact at the root. Register that endpoint with the 
catalog as both artifact and image.


That means service and version discovery will find the /v2 endpoint of 
the glare service if someone says "I want 'image' api 'v2'". It's 
already fair game for a cloud to run without v1 - so that's not a 
problem. (This, btw, is the reason glare has to bump its api to v3 - if 
it still had a v1 in its version discovery document, glance users would 
potentially find that but it would not be a v1 of the image API)


In both cases, /v2/images needs to be the same as glance /v2/images. If 
both are running side-by-side, which is how we normally do major version 
bumps, then client tools and libraries can use the normal version 
discovery process to discover that the cloud has the new /v3 version of 
the api with service-type of 'image', and they can decide if they want 
to use it or not.
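
For reference, the discovery flow described here is just a GET on the unversioned endpoint; a minimal client-side sketch (the response shape follows the standard OpenStack version-discovery document, and the URL is illustrative):

# Sketch of client-side version discovery against a service root.
import requests

root = 'http://glare.example.com'  # unversioned endpoint, illustrative
versions = requests.get(root).json()['versions']
v2 = [v for v in versions if v['id'].startswith('v2')]
if v2:
    # A glance-v2-compatible image API is advertised.
    print('use', v2[0]['links'][0]['href'])
else:
    print('no v2 image API; client must understand the artifacts API')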



Yes - this is going to provide a pile of suck for the glare team, 
because they're going to have to maintain an API mapping layer, and 
they're going to have to maintain it for a full glance v2 api 
deprecation period. Because glance v2 is in DefCore, that is longer than 
a normal deprecation period - but that's life.


The other choice is to tell our users "we don't care about you AT ALL 
and just yanked one of the most fundamental pieces of the core IaaS APIs 
out from under you YOLO"


There is work here - but it's just work - none of it is impossible. 
What's important is to keep our eyes on the fact that there are a TON of 
people using the glance API and we CANNOT screw them. We have service 
discovery, we have version discovery - they are used by the things that 
consume openstack apis - we can use them for this.




If projects use Glance without the client, it means that some direct API
requests will need to be rewritten. But in any case, the number of
differences between Glance v1 and Glance v2 was much larger, and we
switched pretty smoothly. So I hope everything will be fine here, too.



v1 vs v2 is still 

Re: [openstack-dev] TR: [tricircle]

2017-07-11 Thread joehuang
Hi Meher,

Yes, as Victor pointed out, it should be done by the devstack script. But in
our daily development I (and maybe most of us) use Ubuntu to install Tricircle
through devstack, so I am not sure whether there is some bug under RHEL, and I
have no RHEL distribution.

Best Regards
Chaoyi Huang (joehuang)

From: Morales, Victor [victor.mora...@intel.com]
Sent: 12 July 2017 0:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] TR: [tricircle]

Hi Meher,

I don't think that you need to create those folders, or at least that is
what the devstack functions [1] suggest.

Regards/Saludos
Victor Morales

[1] https://github.com/openstack-dev/devstack/blob/master/lib/apache#L178-L192

From: "meher.h...@orange.com" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, July 11, 2017 at 7:51 AM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] TR: [tricircle]





From: HIHI Meher IMT/OLN
Sent: Tuesday, 11 July 2017 14:50
To: HIHI Meher IMT/OLN
Subject: RE: [openstack-dev][tricircle]

Hi Zhiyuan,

Thank you for the response! So, in this case, do I just need to create the
two "sites-available" and "sites-enabled" folders under /etc/httpd and put in
the config files found in /etc/httpd/conf.d/?

Regards,

Meher



From: HIHI Meher IMT/OLN
Sent: Monday, 10 July 2017 16:10
To: 'openstack-dev@lists.openstack.org'
Subject: RE: [openstack-dev][tricircle]

Hello everybody,

I posted earlier about a problem installing Tricircle on a single node: the
script stopped at keystone startup. You advised me to check the
/etc/apache2/sites-enabled folder to see whether the keystone config files
are included. But I have not found this folder, even though the httpd service
is properly installed. Does the name of this folder change according to the
distribution? I use RHEL 7. Thank you in advance!

Meher



From: HIHI Meher IMT/OLN
Sent: Wednesday, 28 June 2017 15:12
To: 'openstack-dev@lists.openstack.org'
Subject: [openstack-dev][tricircle]

Hello everyone,

Let me introduce myself: I am Meher Hihi, doing my internship at Orange Labs
Networks, Lannion, France, for a computer networks and telecommunications
engineering degree.

I am working on innovative distribution solutions for the virtualization
infrastructure of network functions, and more specifically on the OpenStack
Tricircle solution, which is why I am joining your community to participate
in your discussions and learn from your advice.

Indeed, I am trying to install Tricircle on a single node by following this 
documentation: 
“https://docs.openstack.org/developer/tricircle/installation-guide.html#single-pod-installation-with-devstack”.
I managed to install Devstack without any problems, but when I modify the 
local.conf file by adding the Tricircle plugin integration and the HOST_IP, the 
script fails and stops on an error while starting the Keystone service.

I wanted to know whether the problem is with my config file (attached) or 
whether I am missing other configuration. You will also find the IP address of 
the machine in the file.

Thank you in advance for your help. 

Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-11 Thread James Slagle
On Tue, Jul 11, 2017 at 5:53 PM, Steve Baker  wrote:
>
>
> On Tue, Jul 11, 2017 at 3:37 AM, Lars Kellogg-Stedman 
> wrote:
>>
>> On Fri, Jul 7, 2017 at 1:50 PM, James Slagle 
>> wrote:
>>>
>>> There are also some ideas forming around pulling the Ansible playbooks
>>>
>>> and vars out of Heat so that they can be rerun (or run initially)
>>> independently from the Heat SoftwareDeployment delivery mechanism:
>>
>>
>> I think the closer we can come to "the operator runs ansible-playbook to
>> configure the overcloud" the better, but not because I think Ansible is
>> inherently a great tool: rather, I think the many layers of indirection in
>> our existing model make error reporting and diagnosis much more complicated
>> than it needs to be.  Combined with Puppet's "fail as late as possible"
>> model, this means that (a) operators waste time waiting for a deployment
>> that is ultimately going to fail but hasn't yet, and (b) when it does fail,
>> they need relatively intimate knowledge of our deployment tools to backtrack
>> through logs and find the root cause of the failure.
>>
>> If we can offer a deployment mode that reduces the number of layers
>> between the operator and the actions being performed on the hosts I think we
>> would win on both fronts: faster failures and reporting errors as close as
>> possible to the actual problem will result in less frustration across the
>> board.
>>
>> I do like Steve's suggestion of a split model where Heat is responsible
>> for instantiating OpenStack resources while Ansible is used to perform host
>> configuration tasks.  Despite all the work done on Ansible's OpenStack
>> modules, they feel inflexible and frustrating to work with when compared to
>> Heat's state-aware, dependency ordered deployments.  A solution that allows
>> Heat to output configuration that can subsequently be consumed by Ansible --
>> either running manually or perhaps via Mistral for API-driven-deployments --
>> seems like an excellent goal.  Using Heat as a "front-end" to the process
>> means that we get to keep the parameter validation and documentation that is
>> missing in Ansible, while still following the Unix philosophy of giving you
>> enough rope to hang yourself if you really want it.
>
>
> I think this nicely sums up what we should be aiming for, but I'd like to
> elaborate on "either running manually or perhaps via Mistral for
> API-driven-deployments".
>
> I think it's important that we allow full support for both mistral-driven and
> manually running playbooks. If there was no option to run ansible-playbook
> directly then operators would miss one of the main benefits of using ansible
> in the first place (which is leveraging their knowledge of inventory,
> playbooks and roles to deploy things).

+1, I like this idea as well. If you have a few minutes could you
summarize it here:
https://etherpad.openstack.org/p/tripleo-ptg-queens-ansible

I'm attempting to capture some of the common requirements from this
thread for discussion at the ptg so we can consider them when choosing
solution(s).

> I'm thinking specifically about upgrade scenarios where a step fails.
> Currently the only option is a manual diagnosis of the problem, manual
> modification of state, then re-running the entire stack update to see if it
> can get past the failing step.
>
> What would be nice is when a heat->mistral->ansible upgrade step fails, the
> operator is given an ansible-playbook command to run which skips directly to
> the failing step. This would dramatically reduce the debug cycle and also
> make it possible for the operator to automate any required fixes over every
> host in a role. This would likely mean rendering out ansible config files,
> playbooks, (and roles?) to the operator's working directory. What happens to
> these rendered files after deployment is an open question. Delete them?
> Encourage the operator to track them in source control?




-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-11 Thread James Slagle
On Tue, Jul 11, 2017 at 6:53 PM, Steve Baker  wrote:
>
>
> On Tue, Jul 11, 2017 at 6:51 AM, James Slagle 
> wrote:
>>
>> On Mon, Jul 10, 2017 at 11:37 AM, Lars Kellogg-Stedman 
>> wrote:
>> > On Fri, Jul 7, 2017 at 1:50 PM, James Slagle 
>> > wrote:
>> >>
>> >> There are also some ideas forming around pulling the Ansible playbooks
>> >>
>> >> and vars out of Heat so that they can be rerun (or run initially)
>> >> independently from the Heat SoftwareDeployment delivery mechanism:
>> >
>> >
>> > I think the closer we can come to "the operator runs ansible-playbook to
>> > configure the overcloud" the better, but not because I think Ansible is
>> > inherently a great tool: rather, I think the many layers of indirection
>> > in
>> > our existing model make error reporting and diagnosis much more
>> > complicated
>> > than it needs to be.  Combined with Puppet's "fail as late as possible"
>> > model, this means that (a) operators waste time waiting for a deployment
>> > that is ultimately going to fail but hasn't yet, and (b) when it does
>> > fail,
>> > they need relatively intimate knowledge of our deployment tools to
>> > backtrack
>> > through logs and find the root cause of the failure.
>> >
>> > If we can offer a deployment mode that reduces the number of layers
>> > between
>> > the operator and the actions being performed on the hosts I think we
>> > would
>> > win on both fronts: faster failures and reporting errors as close as
>> > possible to the actual problem will result in less frustration across
>> > the
>> > board.
>> >
>> > I do like Steve's suggestion of a split model where Heat is responsible
>> > for
>> > instantiating OpenStack resources while Ansible is used to perform host
>> > configuration tasks.  Despite all the work done on Ansible's OpenStack
>> > modules, they feel inflexible and frustrating to work with when compared
>> > to
>> > Heat's state-aware, dependency ordered deployments.  A solution that
>> > allows
>> > Heat to output configuration that can subsequently be consumed by
>> > Ansible --
>> > either running manually or perhaps via Mistral for
>> > API-driven-deployments --
>> > seems like an excellent goal.  Using Heat as a "front-end" to the
>> > process
>> > means that we get to keep the parameter validation and documentation
>> > that is
>> > missing in Ansible, while still following the Unix philosophy of giving
>> > you
>> > enough rope to hang yourself if you really want it.
>>
>> This is excellent input, thanks for providing it.
>>
>> I think it lends itself towards suggesting that we may like to pursue
>> (again) adding native Ironic resources to Heat. If those were written
>> in a way that also addressed some of the feedback about TripleO and
>> the baremetal deployment side, then we could continue to get the
>> advantages from Heat that you mention.
>>
>> My personal opinion to date is that Ansible's os_ironic* modules are
>> superior in some ways to the Heat->Nova->Ironic model. However, just a
>> Heat->Ironic model may work in a way that has the advantages of both.
>
>
> I too would dearly like to get nova out of the picture. Our placement needs
> mean the scheduler is something we need to work around, and it discards
> basically all context for the operator when ironic can't deploy for some
> reason.
>
> Whether we use a mistral workflow[1], a heat resource, or ansible os_ironic,
> there will still need to be some python logic to build the config drive ISO
> that injects the ssh keys and os-collect-config bootstrap.
>
> Unfortunately ironic iPXE boot from iSCSI[2] doesn't support config-drive
> (still?) so the only option to inject ssh keys is the nova ec2-metadata
> service (or equivalent). I suspect if we can't make every ironic deployment
> method support config-drive then we're stuck with nova.
>
> I don't have a strong preference for a heat resource vs mistral vs ansible
> os_ironic, but given there is some python logic required anyway, I would
> lean towards a heat resource. If the resource is general enough we could
> propose it to heat upstream, otherwise we could carry it in tripleo-common.
>
> Alternatively, we can implement a config-drive builder in tripleo-common and
> invoke that from mistral or ansible.

Ironic's CLI node-set-provision-state command has a --config-drive
option where you just point it at a directory and it will automatically
bundle that dir into the config drive ISO format.

Ansible's os_ironic_node[1] also supports that via the config_drive
parameter. Combining that with a couple of template tasks to create
meta_data.json and user_data files makes for a very easy-to-use
interface.


[1] http://docs.ansible.com/ansible/os_ironic_node_module.html
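
To make that concrete, here is a minimal sketch of feeding a directory to the
documented --config-drive option (the node UUID, SSH key, and paths are
hypothetical):

    import json
    import os
    import subprocess

    cd_dir = '/tmp/configdrive'
    latest = os.path.join(cd_dir, 'openstack', 'latest')
    os.makedirs(latest)

    # meta_data.json carries the SSH key; user_data can bootstrap
    # os-collect-config, cloud-init, etc.
    with open(os.path.join(latest, 'meta_data.json'), 'w') as f:
        json.dump({'uuid': 'NODE_UUID',
                   'public_keys': {'default': 'ssh-rsa AAAA... operator'}}, f)
    with open(os.path.join(latest, 'user_data'), 'w') as f:
        f.write('#cloud-config\n')

    # The CLI bundles the directory into the config drive ISO format itself.
    subprocess.check_call(['ironic', 'node-set-provision-state',
                           '--config-drive', cd_dir, 'NODE_UUID', 'active'])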

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

[openstack-dev] [zun] sandbox and clearcontainers

2017-07-11 Thread Surya.Prabhakar

Hi Folks,
I am just trying to wrap my head around Zun's sandboxing and clear 
containers. From what Hongbin presented in Barcelona (see the attached pic, 
which I captured from his video):

[inline image: diagram of the sandbox/container nesting from Hongbin's Barcelona talk]

the current implementation in Zun is that the sandbox is the outer container 
and the real user container is nested inside the sandbox. I am trying to figure 
out how this is going to play out when we have clear containers.

I envision the following scenarios:


1)  Scenario 1: the sandbox itself is a clear container and the user nests 
another clear container inside the sandbox. This is like nested 
virtualization.

But I am not sure how this is going to work, since the nested containers won't 
get VT-d CPU flags.

2)  Scenario 2: the outer sandbox is just going to be a standard docker 
container without VT-d, and the inner container is going to be the real clear 
container with VT-d. Now this might work well, but we might lose the isolation 
features for network and storage, which lie open in the sandbox. Won't this 
defeat the whole purpose of using clear containers?

I am just wondering what the thought process behind this design in Zun is. If 
this is trivial and I am missing something, please shed some light :).

Thanks
Surya ( spn )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 28

2017-07-11 Thread Jeremy Stanley
On 2017-07-11 23:25:40 +0100 (+0100), Chris Dent wrote:
[...]
> ## "big tent" and "hosted projects"
[...]
> Does anyone recall where this topic landed, or if it hasn't yet
> landed, does anyone have good ideas on how to get it to land?

My favorite responses gravitated toward (paraphrasing, and apologies
for not bothering to dig up citations) "let's shelve this and focus
on the TC and OpenStack visions first, then come back to it when we
can say which proposed solutions actually further those visions."
Monty also indicated he's piecing together a related proposal of
some sort, which I'll let him surprise everyone with when he gets it
into what he considers a presentable state.

Rather than waiting for discussion to conclude, we've already
started to do some things everyone agreed were low-cost and at worst
harmless: Sean pinned the "popular" (according to the Project
Navigator) repos in the main GH org and began manually adding topics
to them for some of the more useful governance tags
(follows-standard-deprecation and compute-starter-kit so far), while
Monty played around a little with some options for standardizing
repo descriptions there and I had a go at clarifying the org
descriptions. Infra also (I think) has achieved some consensus that
we could give up maintaining those GH mirrors in favor of handing
them over to someone who wants to handle them more like the social
media platform they are, performing selectively curated but still
near-real-time replication of repos with standalone automation.

Further, we've been actively looking into ways to filter or
otherwise reorganize the git.o.o site to make it easier to group
repositories by official governance or perhaps sliced across other
axes of metadata at our disposal (much simpler to automate than GH
since we have complete control over the platform serving that). I
believe it's also possible that we could do some symlink and/or
redirect tricks to make official clone and browse URLs stop
requiring namespaces even before we do something as thorough as
https://review.openstack.org/461878 , and intend to further
investigate our options there.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-11 Thread Steve Baker
On Tue, Jul 11, 2017 at 6:51 AM, James Slagle 
wrote:

> On Mon, Jul 10, 2017 at 11:37 AM, Lars Kellogg-Stedman 
> wrote:
> > On Fri, Jul 7, 2017 at 1:50 PM, James Slagle 
> wrote:
> >>
> >> There are also some ideas forming around pulling the Ansible playbooks
> >>
> >> and vars out of Heat so that they can be rerun (or run initially)
> >> independently from the Heat SoftwareDeployment delivery mechanism:
> >
> >
> > I think the closer we can come to "the operator runs ansible-playbook to
> > configure the overcloud" the better, but not because I think Ansible is
> > inherently a great tool: rather, I think the many layers of indirection
> in
> > our existing model make error reporting and diagnosis much more
> complicated
> > than it needs to be.  Combined with Puppet's "fail as late as possible"
> > model, this means that (a) operators waste time waiting for a deployment
> > that is ultimately going to fail but hasn't yet, and (b) when it does
> fail,
> > they need relatively intimate knowledge of our deployment tools to
> backtrack
> > through logs and find the root cause of the failure.
> >
> > If we can offer a deployment mode that reduces the number of layers
> between
> > the operator and the actions being performed on the hosts I think we
> would
> > win on both fronts: faster failures and reporting errors as close as
> > possible to the actual problem will result in less frustration across the
> > board.
> >
> > I do like Steve's suggestion of a split model where Heat is responsible
> for
> > instantiating OpenStack resources while Ansible is used to perform host
> > configuration tasks.  Despite all the work done on Ansible's OpenStack
> > modules, they feel inflexible and frustrating to work with when compared
> to
> > Heat's state-aware, dependency ordered deployments.  A solution that
> allows
> > Heat to output configuration that can subsequently be consumed by
> Ansible --
> > either running manually or perhaps via Mistral for
> API-driven-deployments --
> > seems like an excellent goal.  Using Heat as a "front-end" to the process
> > means that we get to keep the parameter validation and documentation
> that is
> > missing in Ansible, while still following the Unix philosophy of giving
> you
> > enough rope to hang yourself if you really want it.
>
> This is excellent input, thanks for providing it.
>
> I think it lends itself towards suggesting that we may like to pursue
> (again) adding native Ironic resources to Heat. If those were written
> in a way that also addressed some of the feedback about TripleO and
> the baremetal deployment side, then we could continue to get the
> advantages from Heat that you mention.
>
> My personal opinion to date is that Ansible's os_ironic* modules are
> superior in some ways to the Heat->Nova->Ironic model. However, just a
> Heat->Ironic model may work in a way that has the advantages of both.
>

I too would dearly like to get nova out of the picture. Our placement needs
mean the scheduler is something we need to work around, and it discards
basically all context for the operator when ironic can't deploy for some
reason.

Whether we use a mistral workflow[1], a heat resource, or ansible
os_ironic, there will still need to be some python logic to build the
config drive ISO that injects the ssh keys and os-collect-config bootstrap.

Unfortunately ironic iPXE boot from iSCSI[2] doesn't support config-drive
(still?) so the only option to inject ssh keys is the nova ec2-metadata
service (or equivalent). I suspect if we can't make every ironic deployment
method support config-drive then we're stuck with nova.

I don't have a strong preference for a heat resource vs mistral vs ansible
os_ironic, but given there is some python logic required anyway, I would
lean towards a heat resource. If the resource is general enough we could
propose it to heat upstream, otherwise we could carry it in tripleo-common.

Alternatively, we can implement a config-drive builder in tripleo-common
and invoke that from mistral or ansible.

[1] https://review.openstack.org/#/c/313048/1
[2] http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html#scenario-1-ipxe-boot-from-iscsi-volume
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug triage experimentation

2017-07-11 Thread Jeremy Stanley
On 2017-07-10 07:33:28 -0400 (-0400), Sean Dague wrote:
[...]
> Ideally storyboard would just be a lot more receptive to these kinds of
> things, by emitting a more native event stream,

Well, there is
http://git.openstack.org/cgit/openstack-infra/storyboard/tree/storyboard/notifications/publisher.py
so replacing or partnering its RabbitMQ publisher with something
like an MQTT publisher into firehose.openstack.org is probably not
terribly hard for someone with interest in that and would be
generally useful.

> and having really good tag support (preferably actually project
> scoped tags, so setting it on the nova task doesn't impact the
> neutron tasks on the same story, as a for instance)
[...]

Your queries (including those used to build automatic tasklists and
boards) could just include project in addition to tag, right? Or is
this more of a UI concern, being able to click on an arbitrary tag
in the webclient and only get back a set of tagged stories for the
same project rather than across all projects?
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 28

2017-07-11 Thread Chris Dent


Blog version at: https://anticdent.org/tc-report-28.html

It's been a while since I've done one of these. I was between employers and
taking a break but am now back in the groove. There's been some activity in the
interim, which I'll try to summarize below, but first notes from this evening's
meeting.

I'm not fully up to speed yet, so this may be a bit sparse. It will pick up.

# Meeting

A meeting was
[declared](http://lists.openstack.org/pipermail/openstack-dev/2017-July/119378.html)
to "discuss the next steps in establishing the vision" and "wrap up the goals".
[Meeting notes and
log](http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-07-11-20.01.html).

## Goals

The first topic was moving forward on the two community goals for the Queens
cycle. There was no disagreement, especially as "champions" have stepped
forward to shepherd all of the projects on the goals. The chosen goals are:

* [policy and docs in code](https://review.openstack.org/#/c/469954/)
* [tempest plugins separate repos](https://review.openstack.org/#/c/369749/)

## Vision Next Step

Again, mostly violent agreement on what to do: Accept the revised vision and
see how it goes. To ensure that the document is effectively responsive to any
necessary adjustments over time, it is being moved from reference to
resolution. There's a stack of four changes starting with the [original
draft](https://review.openstack.org/#/c/453262/).

The vision itself has been slightly adjusted to be a bit more amenable to
skimming and make the overarching goals a bit more obvious.

## Other Meeting Stuff

### How's Office Hours Going?

Mixed. But worth continuing the experiment. The hope is that [office
hours](https://governance.openstack.org/tc/#office-hours) provide a reliable
but casual way to interact with members of the TC. Thus far they have mostly
been the TC talking amongst themselves, but several attendees at tonight's
meeting reported that though they don't speak much in office hours, they do
read. I'd personally like to see a lot more participation from anyone and
everyone.

Some people would like to change the schedule a bit, as one of the three slots
is a lot more popular than others and the least popular is very unpopular. The
reaction? "patches accepted".

That 01:00 UTC Wednesday slot is designed to allow some interaction with people
in the APAC region, with a long term goal of establishing future leaders in
OpenStack from that region.

What's clear is that when there are some people there, a conversation happens
containing relevant discussion. There's always something to talk about. For
example, though there were two agenda items in this meeting, new topics kept
coming up.

### The Diversity Report

Bitergia produced a report on [gender diversity in
OpenStack](http://superuser.openstack.org/articles/bitergia-intel-report/);
though their data has some issues, the general conclusion (we could do a _lot_
better) stands.

### Glare and Glance Compatibility

In [office hours earlier today](http://p.anticdent.org/JIh) there was some
discussion about Glare's application to be an official project. This came up
again in tonight's meeting and there is also a [long
thread](http://lists.openstack.org/pipermail/openstack-dev/2017-July/119442.html)
on openstack-dev. There are concerns about overlap with Glance. If the overlap
is such that an exactly concurrent API could be provided, this is potentially a
very good thing. However, if the overlap is almost-but-not-quite then that
could present problems.  The mailing list thread has more information.

# Pending Stuff

## "big tent" and "hosted projects"

Two long email threads

* [Moving away from "big tent"
  
terminology](http://lists.openstack.org/pipermail/openstack-dev/2017-June/118368.html)
* [How to deal with confusion around "hosted
  
projects"](http://lists.openstack.org/pipermail/openstack-dev/2017-June/119043.html)

covered a lot of ground trying to work on the topic "better communicating what
is OpenStack". The "big tent" term is misunderstood and misused and the
difference between an "official" (subject to TC governance) project and one
that just happens to use OpenStack infra is also misunderstood, but sometimes
manipulated for gain.

While it was decided to straightforwardly [purge "big
tent"](https://review.openstack.org/#/c/480500/) in the governance repository
the discussion about hosted projects went very broad (the OpenStack adaptation
of Godwin's law is that any discussion will eventually generalize to "What is
OpenStack?") with some consideration of no longer allowing use of some
combination of

* the `openstack` prefix in git repositories
* openstack infrastructure in general

to just anyone who comes along. It's not clear how this was resolved, if at
all.  There was an impassioned [plea to fix the real
problem(s)](http://lists.openstack.org/pipermail/openstack-dev/2017-June/119075.html)
instead of limiting open access for people who want to create more openness.

Does anyone recall where this topic landed, or if it hasn't yet landed, does 
anyone have good ideas on how to get it to land?

Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-11 Thread Steve Baker
On Tue, Jul 11, 2017 at 3:37 AM, Lars Kellogg-Stedman 
wrote:

> On Fri, Jul 7, 2017 at 1:50 PM, James Slagle 
> wrote:
>
>> There are also some ideas forming around pulling the Ansible playbooks
>>
> and vars out of Heat so that they can be rerun (or run initially)
>> independently from the Heat SoftwareDeployment delivery mechanism:
>>
>
> I think the closer we can come to "the operator runs ansible-playbook to
> configure the overcloud" the better, but not because I think Ansible is
> inherently a great tool: rather, I think the many layers of indirection in
> our existing model make error reporting and diagnosis much more complicated
> than it needs to be.  Combined with Puppet's "fail as late as possible"
> model, this means that (a) operators waste time waiting for a deployment
> that is ultimately going to fail but hasn't yet, and (b) when it does fail,
> they need relatively intimate knowledge of our deployment tools to
> backtrack through logs and find the root cause of the failure.
>
> If we can offer a deployment mode that reduces the number of layers
> between the operator and the actions being performed on the hosts I think
> we would win on both fronts: faster failures and reporting errors as close
> as possible to the actual problem will result in less frustration across
> the board.
>
> I do like Steve's suggestion of a split model where Heat is responsible
> for instantiating OpenStack resources while Ansible is used to perform host
> configuration tasks.  Despite all the work done on Ansible's OpenStack
> modules, they feel inflexible and frustrating to work with when compared to
> Heat's state-aware, dependency ordered deployments.  A solution that allows
> Heat to output configuration that can subsequently be consumed by Ansible
> -- either running manually or perhaps via Mistral for
> API-driven-deployments -- seems like an excellent goal.  Using Heat as a
> "front-end" to the process means that we get to keep the parameter
> validation and documentation that is missing in Ansible, while still
> following the Unix philosophy of giving you enough rope to hang yourself if
> you really want it.
>

I think this nicely sums up what we should be aiming for, but I'd like to
elaborate on "either running manually or perhaps via Mistral for
API-driven-deployments".

I think it's important that we allow full support for both mistral-driven
and manually running playbooks. If there was no option to run
ansible-playbook directly then operators would miss one of the main
benefits of using ansible in the first place (which is leveraging their
knowledge of inventory, playbooks and roles to deploy things).

I'm thinking specifically about upgrade scenarios where a step fails.
Currently the only option is a manual diagnosis of the problem, manual
modification of state, then re-running the entire stack update to see if it
can get past the failing step.

What would be nice is when a heat->mistral->ansible upgrade step fails, the
operator is given an ansible-playbook command to run which skips directly
to the failing step. This would dramatically reduce the debug cycle and
also make it possible for the operator to automate any required fixes over
every host in a role. This would likely mean rendering out ansible config
files, playbooks, (and roles?) to the operator's working directory. What
happens to these rendered files after deployment is an open question.
Delete them? Encourage the operator to track them in source control?
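
(For illustration, that hand-off could be as simple as the sketch below. The
--start-at-task flag is standard ansible-playbook; the playbook and task names
are invented:)

    def recovery_command(playbook, failed_task):
        # Emit the exact command an operator can re-run to resume the
        # upgrade at the step that failed.
        return ('ansible-playbook %s --start-at-task="%s"'
                % (playbook, failed_task))

    print(recovery_command('upgrade_steps_playbook.yaml', 'Upgrade step 3'))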
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][ironic][telemetry][dragonflow][freezer][kuryr][manila][mistral][monasca][neutron][ansible][congress][rally][senlin][storlets][zun][docs] repos without signs of migration sta

2017-07-11 Thread Eric Fried
> can't speak regarding ceilometer-powervm since it's vendor specific.

I can confirm that ceilometer-powervm (and nova-powervm and
networking-powervm) shouldn't be tracked with this effort, since they
publish to readthedocs and not docs.openstack.org.

Something something Big Tent something Governance something something
Official.

efried
.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][docs] using openstackdocstheme

2017-07-11 Thread Doug Hellmann
As projects review patches related to the migration project, please
keep in mind that only *official* projects listed as being under
TC governance [1] should use the openstackdocs theme. If your
documentation is not being published to docs.openstack.org, please
use a different theme.

Doug

[1] https://governance.openstack.org/tc/reference/projects/index.html
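
For reference, the switch in a project's Sphinx conf.py looks something like
this sketch (the per-project values are placeholders):

    # conf.py fragment for an *official* project publishing to
    # docs.openstack.org (project-specific values are placeholders).
    extensions = ['openstackdocstheme']
    html_theme = 'openstackdocs'

    # openstackdocstheme options:
    repository_name = 'openstack/example-project'
    bug_project = 'example-project'
    bug_tag = 'docs'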

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [octavia] Webex to discuss Octavia L3 active/active spec Part II

2017-07-11 Thread Jason Niesz
This is part II of the discussion around the L3 active/active spec 
(https://review.openstack.org/#/c/453005/).  The goal of this meeting is to get 
community feedback in order to finalize the details around the L3 active/active 
spec.  I have posted the meeting details below.

Date & Time: July 14th 2017 @ 11 AM PST

Webex Details:
Link: 
https://walmart.webex.com/walmart/j.php?MTID=m2fb12ebb0fd25d5454c99bea39fc5ec7
Dial-in: +1-855-797-9485
Meeting Number: 744 800 637
Meeting Password: 1

Thanks,

Jason Niesz
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug triage experimentation

2017-07-11 Thread James E. Blair
Sean Dague  writes:

> On 07/05/2017 03:23 PM, Emilien Macchi wrote:
> 
>> 
>> I also believe that some of the scripts could be transformed into
>> native features of Storyboard where bugs could be auto-triaged
>> periodically without human intervention.
>> Maybe it would convince more OpenStack projects to leave Launchpad and
>> adopt Storyboard?
>> I would certainly one of those and propose such a change for TripleO &
>> related projects.
>
> Maybe... my concern there is that workflow encoded into trackers is
> pretty static, and it's hard to evolve, because it impacts all users of
> that platform. Whereas a script that processes bugs externally can
> adapt really quickly based on what's working / not working with a
> particular team. There is no 1 right way to handle bugs, it's just about
> making every bug handling team the most effective that they can be.
> Which means I assume that different teams would find different parts of
> this useful, and other parts things they wouldn't want to use at all.
> That's why I tried to make every "processing unit" its own CLI.
>
> Ideally storyboard would just be a lot more receptive to these kinds of
> things, by emitting a more native event stream, and having really good
> tag support (preferably actually project scoped tags, so setting it on
> the nova task doesn't impact the neutron tasks on the same story, as a
> for instance) so the hack we need to do on LP isn't needed. But,
> actually, beyond that, keeping the processing logic team specific is a
> good thing. It's much like the fact that we've largely done gerrit
> review dashboards client side, because they are faster to iterate on than
> server side.

I agree.  I think being able to add things to Storyboard is great, and
as we've been using it more, we've done some of that.  But we've also
run into places where we found that we needed Storyboard to do some
things that were ultimately project-specific workflows.  So I think long
term we're going to have both things -- adding features that make sense
globally as well as ones that facilitate local configuration and
workflows.

As an example, the "board" feature on storyboard can be really useful,
but we wanted to automate some of the movement between lanes.  Lanes are
arbitrary.  Rather than writing a new processing language to describe
that and incorporating that into Storyboard, we wrote a script to manage
one specific board using the Storyboard API.

The board is here: https://storyboard.openstack.org/#!/board/41

The script is here: 
http://git.openstack.org/cgit/openstack-infra/zuul/tree/tools/update-storyboard.py?h=feature/zuulv3

(Basically, that script automatically moves tasks between lanes based on
status according to the map defined on line 65, while still allowing
folks to manually move tasks between certain classes of lanes -- so a
task marked as 'todo' can be in either the 'New', 'Backlog', or 'Todo'
lanes.)
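
(The policy itself is just a small status-to-lanes map; a toy sketch of the
idea, with lane names and statuses invented rather than copied from the
script:)

    # Tasks with a given status may legitimately sit in any of these
    # lanes; anything else gets moved to the first (canonical) lane.
    ALLOWED_LANES = {
        'todo': ['New', 'Backlog', 'Todo'],
        'inprogress': ['In Progress'],
        'review': ['In Review'],
        'merged': ['Done'],
    }

    def reconcile(task_status, current_lane):
        lanes = ALLOWED_LANES.get(task_status, [])
        if current_lane in lanes:
            return None                       # manual placement, leave it
        return lanes[0] if lanes else None    # otherwise move it here

    print(reconcile('todo', 'Backlog'))   # None -> leave the card alone
    print(reconcile('merged', 'Todo'))    # 'Done' -> move the card there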

I'm imagining a future where we have lots of scripts like that (or maybe
a few framework scripts like Sean's, with configuration), and we run
those scripts in Infra but projects are responsible for their own
configuration.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][ironic][telemetry][dragonflow][freezer][kuryr][manila][mistral][monasca][neutron][ansible][congress][rally][senlin][storlets][zun][docs] repos without signs of migration sta

2017-07-11 Thread gordon chung


On 11/07/17 10:43 AM, Doug Hellmann wrote:
> Excerpts from gordon chung's message of 2017-07-11 14:23:41 +:
>>
>> On 10/07/17 01:26 PM, Doug Hellmann wrote:
>>> openstack/ceilometer-powervm
>>> openstack/ceilometermiddleware
>>
>> i don't believe there are docs for these. ceilometermiddleware is a
>> simple wsgi middleware and its usage is part of ceilometer's install
>> docs. ceilometer-powervm contains the powervm driver for ceilometer's
>> polling agent.
>
> I've removed them from the tracking list for now, but it seems like
> both are likely to have contributor documentation, at least, and
> the driver would likely have installation and configuration docs,
> right?

can't speak regarding ceilometer-powervm since it's vendor specific. 
ceilometermiddleware is admittedly purely maintenance mode for last few 
cycles so i'm going to say realistically, it's very very low priority 
for anyone to actually follow through with contributor docs for it.

>
>>
>> i missed this but how do we handle smaller add-on type repos like this?
>> i imagine we want to keep docs grouped by project so they are not
>> scattered across the same level.
>>
>> cheers,
>>
>
> The new "rule" for docs is "The documentation for something should
> live in the same repository as the code."
>
> Hyperlinks are easy and free, so we can use those to ensure that the
> results are easy to find. We're building up a nice set of landing pages
> for "all of the admin guides" and "all of the configuration references"
> and so on within the openstack-manuals repository.
>

ok, sounds good to me.

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] decreased coverage for telemetry

2017-07-11 Thread Pradeep Kilambi
On Tue, Jul 11, 2017 at 3:17 PM, Wesley Hayutin  wrote:
> Greetings,
>
> I was looking through the mailing list and I did not see any emails
> explicitly calling out the decreased coverage for telemetry in tripleo due
> to [1].  A series of changes went into the CI system to disable telemetry
> [2].
>
> There is work being done to restore more coverage for telemetry by limiting
> the resources it consumes [3].  We are also working on additional scenarios
> in t-h-t/ci/environments/ to better cover ceilometer.
>
> If the CI environment you are working in has the resources to cover
> ceilometer that is great, however if you find issues like [1] we highly
> suggest you follow the same pattern until coverage is restored upstream.
>
> Thank you!
>
> [1] https://bugs.launchpad.net/tripleo/+bug/1693174
> [2] https://review.openstack.org/#/q/topic:bug/1680195
> [3]
> https://review.openstack.org/#/c/475838/
> https://review.openstack.org/#/c/474969/
> https://review.openstack.org/#/c/47/
>
>

Thanks for starting this thread Wes. I concur with this. We got bitten
recently by many issues that we could have caught in ci had telemetry
been enabled. I spoke to trown and Emilien about this a few times
already. I do understand the resource footprint it causes.  But with
recent improvements and changes upstream, things should be back to
being more manageable. We do have telemetry tested in scenario001 job,
but that doesn't cover all scenarios. So there is a gap in coverage.

 I hope we can either re-enable these services by default in CI and see
how things work, or at least add a separate gate job to be able to test the
HA scenario properly with telemetry enabled.

-- 
Cheers,
~ Prad

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Status update, July 7th

2017-07-11 Thread Sean Dague

On 07/11/2017 12:14 PM, Dean Troyer wrote:

On Fri, Jul 7, 2017 at 3:19 AM, Thierry Carrez  wrote:

== Need for a TC meeting next Tuesday ==

[...]

others). Who is up for discussing those items at our usual meeting slot
time on Tuesday ?


I am unlikely to make the meeting, travel plans are more fluid than I
would like today.  Will be there if possible.


I will also be unable to attend today due to being in a metal tube at 
the time of the meeting.


-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] deprecating and removing tools/sample_data.sh

2017-07-11 Thread Lance Bragstad
Good point. I did a bit more digging and it looks like it was originally
intended for devstack [0]. At least based on the original commit message
that introduced the file. Devstack seems to take its own approach to
generating sample data, mainly using keystone-manage and functions
defined in lib/keystone [1].

I'll propose a patch to remove it and we can continue the discussion in
Gerrit.

Thanks!


[0]
https://github.com/openstack/keystone/commit/09a64dd862463fe116c4ddb8aee538e4bc7f56e0
[1]
https://github.com/openstack-dev/devstack/blob/e4b2e3b93e892df3cb4be778bcd9813cf17f9a1c/lib/keystone#L331
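
For comparison, the supported path boils down to one bootstrap call; a minimal
sketch (the password and URLs are placeholders):

    import subprocess

    subprocess.check_call([
        'keystone-manage', 'bootstrap',
        '--bootstrap-password', 'secret',
        '--bootstrap-username', 'admin',
        '--bootstrap-project-name', 'admin',
        '--bootstrap-role-name', 'admin',
        '--bootstrap-service-name', 'keystone',
        '--bootstrap-region-id', 'RegionOne',
        '--bootstrap-admin-url', 'http://10.0.0.1:35357/v3',
        '--bootstrap-public-url', 'http://10.0.0.1:5000/v3',
        '--bootstrap-internal-url', 'http://10.0.0.1:5000/v3',
    ])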


On 07/05/2017 04:28 PM, Colleen Murphy wrote:
> On Wed, Jul 5, 2017 at 9:36 PM, Lance Bragstad  > wrote:
>
> Hi all,
>
> Keystone has a script to perform some bootstrapping operations
> [0]. It's
> not really tested and its purpose has been superseded by using the
> `keystone-manage bootstrap` command. Based on codesearch, only
> openstack/rpm-packaging references the script [1].
>
> It's not exactly superseded by `keystone-manage bootstrap` - in fact
> it uses bootstrap as part of its data generation:
>
> https://github.com/openstack/keystone/blob/82f60fe22c405829f8e5f6576f25cf3663b10f73/tools/sample_data.sh#L97
>
>
>
> Is anyone opposed to the removal of this script in favor of more
> supported and tested bootstrapping methods?
>
> I haven't used this script in a while but I have found value in it in
> the past. It would be great if it or something like it was gate tested.
>
> Colleen 
>
>
> Thanks,
>
>
> [0]
> 
> https://github.com/openstack/keystone/blob/82f60fe22c405829f8e5f6576f25cf3663b10f73/tools/sample_data.sh
> 
> 
> [1]
> http://codesearch.openstack.org/?q=sample_data.sh=nope==
> 
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][ci] decreased coverage for telemetry

2017-07-11 Thread Wesley Hayutin
Greetings,

I was looking through the mailing list and I did not see any emails
explicitly calling out the decreased coverage for telemetry in tripleo due
to [1].  A series of changes went into the CI system to disable telemetry
[2].

There is work being done to restore more coverage for telemetry by limiting
the resources it consumes [3].  We are also working on additional scenarios
in t-h-t/ci/environments/ to better cover ceilometer.

If the CI environment you are working in has the resources to cover
ceilometer that is great, however if you find issues like [1] we highly
suggest you follow the same pattern until coverage is restored upstream.

Thank you!

[1] https://bugs.launchpad.net/tripleo/+bug/1693174
[2] https://review.openstack.org/#/q/topic:bug/1680195
[3]
https://review.openstack.org/#/c/475838/
https://review.openstack.org/#/c/474969/
https://review.openstack.org/#/c/47/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-11 Thread Hongbin Lu
Greg,

No, it isn’t. We are working hard to integrate with Cinder (either via Fuxi or 
direct integration). Perhaps this design spec can provide some information 
about where we are heading: https://review.openstack.org/#/c/468658/ .

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-11-17 2:13 PM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin,

another quick question,
is ZUN integrated with FUXI for Container mounting of Cinder Volumes yet ?

( my guess is no ... don’t see any options for that in the zun cli for create 
or run )

Greg.

From: Hongbin Lu >
Date: Tuesday, July 11, 2017 at 2:04 PM
To: Greg Waines >, 
"openstack-dev@lists.openstack.org" 
>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

There is no such API in Zun. I created a BP for this feature request: 
https://blueprints.launchpad.net/zun/+spec/show-container-engine-info . 
Hopefully, the implementation will be available in the next release or two.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-11-17 10:24 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hey Hongbin,

is there a way to display ZUN’s resource usage ?
i.e. analogous to nova’s “nova hypervisor-show ”
e.g. memory usage, CPU usage, etc.

Greg.


From: Hongbin Lu >
Date: Friday, July 7, 2017 at 2:08 PM
To: Greg Waines >, 
"openstack-dev@lists.openstack.org" 
>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Zun currently leverages the “--memory", “--cpu-period”, and “--cpu-quota” 
options to limit the CPU and memory. Zun does do resource tracking and 
scheduling right now, but this is temporary. The long-term plan is to switch to 
the Placement API [1] after it is split out from Nova.

[1] https://docs.openstack.org/nova/latest/placement.html
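
In docker SDK terms, those three options map onto kwargs like this sketch
(the image and values are arbitrary examples, not Zun code):

    import docker

    client = docker.from_env()
    container = client.containers.run(
        'nginx:alpine',
        detach=True,
        mem_limit='512m',    # --memory
        cpu_period=100000,   # --cpu-period, in microseconds
        cpu_quota=50000,     # --cpu-quota: 50000/100000 = half a CPU
    )
    print(container.id)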

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 11:00 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin.

I’ve got zun setup in devstack now, so will play with it a bit to better 
understand.

Although a couple more questions (sorry)

• in the current zun implementation of containers directly on compute 
nodes,
does zun leverage any of the docker capabilities to restrict the amount of 
resources used by a container ?
e.g. the amount and which cpu cores the container’s processes are allowed to 
use,
 how much memory the container is allowed to access/use, etc.

e.g. see https://docs.docker.com/engine/admin/resource_constraints/

• and then,
I know you mentioned that the assumption is that there are separate 
availability zones for zun and nova.

o but does zun do Resource Tracking and Scheduling based on that Resource 
Tracking for the nodes it's using ?

Greg.


From: Hongbin Lu >
Date: Friday, July 7, 2017 at 10:42 AM
To: Greg Waines >, 
"openstack-dev@lists.openstack.org" 
>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Sorry for the confusion. I used the term “container-in-baremetal” to refer to a 
deployment pattern in which containers run on physical compute nodes (not 
an instance provided by Nova/Ironic). I think your second interpretation is 
right if “OpenStack Hosts” means a compute node. I think a diagram [1] could 
explain the current deployment scenario better.

For the container-in-coe scenario, it is out of the current focus but the team 
is exploring ideas on it. I don’t have specific answers for the two questions 
you raised but I encourage you to bring up your use cases to the team and keep 
the discussion open.

[1] https://www.slideshare.net/hongbin034/clipboards/zun-deployment

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 7:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: 

Re: [openstack-dev] [oslo] scheduling oslosphinx for retirement at the start of queens

2017-07-11 Thread Ben Nemec

+1

On 07/10/2017 08:10 AM, Doug Hellmann wrote:

Oslo team,

With all documentation now moving to use the openstackdocs theme,
I propose that we retire the oslosphinx repository during Queens.
We should go ahead and create the stable/pike branch at the end of
this cycle, so that we have a way to deal with bugs in existing
pike releases, but I think we can retire the repository at any point
after that.

Thoughts?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-11 Thread Waines, Greg
Thanks Hongbin,

another quick question,
is ZUN integrated with FUXI for Container mounting of Cinder Volumes yet ?

( my guess is no ... don’t see any options for that in the zun cli for create 
or run )

Greg.

From: Hongbin Lu 
Date: Tuesday, July 11, 2017 at 2:04 PM
To: Greg Waines , 
"openstack-dev@lists.openstack.org" 
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

There is no such API in Zun. I created a BP for this feature request: 
https://blueprints.launchpad.net/zun/+spec/show-container-engine-info . 
Hopefully, the implementation will be available in the next release or two.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-11-17 10:24 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hey Hongbin,

is there a way to display ZUN’s resource usage ?
i.e. analogous to nova’s “nova hypervisor-show ”
e.g. memory usage, CPU usage, etc.

Greg.


From: Hongbin Lu >
Date: Friday, July 7, 2017 at 2:08 PM
To: Greg Waines >, 
"openstack-dev@lists.openstack.org" 
>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Zun currently leverages the “--memory", “--cpu-period”, and “--cpu-quota” 
options to limit the CPU and memory. Zun does do resource tracking and 
scheduling right now, but this is temporary. The long-term plan is to switch to 
the Placement API [1] after it is split out from Nova.

[1] https://docs.openstack.org/nova/latest/placement.html

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 11:00 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin.

I’ve got zun setup in devstack now, so will play with it a bit to better 
understand.

Although a couple more questions (sorry)

• in the current zun implementation of containers directly on compute 
nodes,
does zun leverage any of the docker capabilities to restrict the amount of 
resources used by a container ?
e.g. the amount and which cpu cores the container’s processes are allowed to 
use,
 how much memory the container is allowed to access/use, etc.

e.g. see https://docs.docker.com/engine/admin/resource_constraints/

• and then,
I know you mentioned that the assumption is that there are separate 
availability zones for zun and nova.

o but does zun do Resource Tracking and Scheduling based on that Resource 
Tracking for the nodes it's using ?

Greg.


From: Hongbin Lu >
Date: Friday, July 7, 2017 at 10:42 AM
To: Greg Waines >, 
"openstack-dev@lists.openstack.org" 
>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Sorry for the confusion. I used the term “container-in-baremetal” to refer to a 
deployment pattern in which containers run on physical compute nodes (not 
an instance provided by Nova/Ironic). I think your second interpretation is 
right if “OpenStack Hosts” means a compute node. I think a diagram [1] could 
explain the current deployment scenario better.

For the container-in-coe scenario, it is out of the current focus but the team 
is exploring ideas on it. I don’t have specific answers for the two questions 
you raised but I encourage you to bring up your use cases to the team and keep 
the discussion open.

[1] https://www.slideshare.net/hongbin034/clipboards/zun-deployment

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 7:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hongbin,
Thanks for the responses.
A couple of follow up, clarifying questions ...


• You mentioned that currently Zun supports the container-in-baremetal 
scenario

o is this done by leveraging the Ironic baremetal service ?

•  e.g. does Zun launch an Ironic baremetal instance (running docker) in order 
to host containers being launched by Zun ?

o OR

o do you mean that, in this scenario, OpenStack Hosts are deployed with docker 
software, and Zun expects docker to be running on each OpenStack Host, in order 
to launch its 

Re: [openstack-dev] [heat] Online video meet up this week (topic:review)

2017-07-11 Thread Jeremy Stanley
On 2017-07-12 01:47:02 +0800 (+0800), Rico Lin wrote:
[...]
> we will make our meeting this week as an online video meeting
[...]

Friendly reminder: "If the project has meetings [...] they should be
public and in IRC. They should all be logged and published"
https://governance.openstack.org/tc/reference/new-projects-requirements.html

Also, while Zoom's service and client software may be "free" in the
gratis sense, they are not free in the libre sense. Moving your
meetings to a proprietary system (whether it charges money for you
to be able to use it or not) isn't in the spirit of an open
community and necessarily excludes participation by people who value
software freedom.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Migration from Neutron ML2OVS to OVN

2017-07-11 Thread Ben Nemec



On 07/11/2017 10:17 AM, Numan Siddique wrote:

Hello Tripleo team,

I have a few questions regarding migration from neutron ML2OVS to OVN. 
Below are some of the requirements:


  - We want to migrate an existing deployment from the Neutron default 
ML2OVS to OVN.

  - We are targeting this for the TripleO Queens release.
  - The plan is to first upgrade the tripleo deployment from Pike to 
Queens with no changes to neutron, i.e. with neutron ML2OVS. Once the 
upgrade is done, we want to migrate to OVN.
  - The migration process will stop all the neutron agents, configure 
the neutron server to load the OVN mechanism driver, and start OVN services 
(with no or very limited datapath downtime).
  - The migration would be handled by an ansible script. We have a PoC 
ansible script which can be found here [1]


And the questions are:
- (A broad question) What is the right way to migrate and switch the 
neutron plugin? Can the stack upgrade handle the migration as well?
- Should the migration procedure be part of tripleo, or can it be a 
standalone ansible script? (I presume it should be the former.)
- If it should be part of tripleo, then what would be the command to 
do it? A stack update command with appropriate environment files for OVN?
- In case the migration is done as a standalone script, how do we 
handle later updates/upgrades, since tripleo wouldn't be aware of the 
migration?


This last point seems like the crux of the discussion here.  Sure, you 
can do all kinds of things to your cloud using standalone bits, but if 
any of them affect things tripleo manages (which this would) then you're 
going to break on the next stack update.


If there are things about the migration that a stack-update can't 
handle, then the migration process would need to be twofold: 1) Run the 
standalone bits to do the migration 2) Update the tripleo configuration 
to match the migrated config so stack-updates work.


This is obviously a complex and error-prone process, so I'd strongly 
encourage doing it in a tripleo-native fashion instead if at all possible.





Please provide your comments so that we can move in the right direction.

[1] - https://github.com/openstack/networking-ovn/tree/master/migration

Thanks
Numan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-11 Thread Hongbin Lu
Hi Greg,

There is no such API in Zun. I created a BP for this feature request: 
https://blueprints.launchpad.net/zun/+spec/show-container-engine-info . 
Hopefully, the implementation will be available in the next release or two.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-11-17 10:24 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hey Hongbin,

is there a way to display ZUN’s resource usage ?
i.e. analogous to nova’s “nova hypervisor-show ”
e.g. memory usage, CPU usage, etc.

Greg.


From: Hongbin Lu >
Date: Friday, July 7, 2017 at 2:08 PM
To: Greg Waines >, 
"openstack-dev@lists.openstack.org" 
>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Zun currently leverages the “--memory", “--cpu-period”, and “--cpu-quota” 
options to limit the CPU and memory. Zun does do resource tracking and 
scheduling right now, but this is temporary. The long-term plan is to switch to 
the Placement API [1] after it is split out from Nova.

[1] https://docs.openstack.org/nova/latest/placement.html
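
As a rough illustration, those three options map onto the docker Python
SDK like this (the image name and limit values below are only
placeholders, not Zun defaults):

import docker

client = docker.from_env()
container = client.containers.run(
    "redis:3.0.6",       # placeholder image
    detach=True,
    mem_limit="512m",    # --memory
    cpu_period=100000,   # --cpu-period, in microseconds
    cpu_quota=50000,     # --cpu-quota: 50% of one CPU per period
)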

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 11:00 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin.

I’ve got zun setup in devstack now, so will play with it a bit to better 
understand.

Although a couple more questions (sorry)

• in the current zun implementation of containers directly on compute 
nodes,
does zun leverage any of the docker capabilities to restrict the amount of 
resources used by a container ?
e.g. the amount and which cpu cores the container’s processes are allowed to 
use,
 how much memory the container is allowed to access/use, etc.

e.g. see https://docs.docker.com/engine/admin/resource_constraints/

• and then,
I know you mentioned that the assumption is that there are separate 
availability zones for zun and nova.

o but does zun do Resource Tracking and Scheduling based on that Resource 
Tracking for the nodes it's using ?

Greg.


From: Hongbin Lu >
Date: Friday, July 7, 2017 at 10:42 AM
To: Greg Waines >, 
"openstack-dev@lists.openstack.org" 
>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Sorry for the confusion. I used the term “container-in-baremetal” to refer to a 
deployment pattern in which containers run on physical compute nodes (not 
an instance provided by Nova/Ironic). I think your second interpretation is 
right if “OpenStack Hosts” means a compute node. I think a diagram [1] could 
explain the current deployment scenario better.

For the container-in-coe scenario, it is out of the current focus but the team 
is exploring ideas on it. I don’t have specific answers for the two questions 
you raised but I encourage you to bring up your use cases to the team and keep 
the discussion open.

[1] https://www.slideshare.net/hongbin034/clipboards/zun-deployment

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 7:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hongbin,
Thanks for the responses.
A couple of follow up, clarifying questions ...


• You mentioned that currently Zun supports the container-in-baremetal 
scenario

o is this done by leveraging the Ironic baremetal service ?

•  e.g. does Zun launch an Ironic baremetal instance (running docker) in order 
to host containers being launched by Zun ?

o OR

o do you mean that, in this scenario, OpenStack Hosts are 
deployed with docker software,
and Zun expects docker to be running on each OpenStack Host, in order to launch 
its containers ?


• In the future, when Zun supports the container-in-coe scenario

o is the idea that the COE (Kubernetes or Swarm) will abstract from Zun 
whether the COE’s minion nodes are OpenStack VMs or OpenStack Baremetal 
Instances (or OpenStack Hosts) ?

o is the idea that Magnum will support launching COEs with VM minion nodes 
and/or Baremetal minion nodes ?


Greg.


From: Hongbin Lu >
Reply-To: 

[openstack-dev] [heat] Online video meet up this week (topic:review)

2017-07-11 Thread Rico Lin
Hi Team

We would like to start doing online video meetups and see if we can all
reach better co-work with that.
The reason for doing this is that we're a global team, so it is almost
impossible for all of us to literally sit in the same room at the PTG or anywhere
(it's just not possible to ask all companies to shift all their heat
developers/ops to a single event).
Also, we should not wait till the PTG or Summit to deal with all face-to-face
tasks (at least not for small tasks).
Anyway, we will make our meeting this week an online video meeting, and
try to see if that format works for us or not. And if we do well, we
might also consider making the PTG a hybrid mode.

Here are the details:
Topic: Review
Time: Wednesday 07/12 at 1500 UTC (1 hr)
Location: zoom.us (I will notify the specific room location in heat's irc
channel right before the meeting)
Agenda:

1. pre-meetup discussion
2. We shall go through patches for BPs and Goals first
3. then pick out some patches worth reviewing and land as many as we
can.
4. post-meetup discussion (including feedback and suggestion time)

Host: Rico Lin
*Pre-requirement: Please register a Zoom account (zoom.us)
which will be the channel that we will use for this meetup*

With that free account, you should be able to join the meeting. Just
remember to install zoom on your device. And don't worry, it's easy to join
and operate.

We do hope more people (core or not) will join this meeting, which might also be
a good chance to get a clearer view of heat's new features and bug
fixes in detail. We need more reviewers together, so see you there!!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Status update, July 7th

2017-07-11 Thread Dean Troyer
On Fri, Jul 7, 2017 at 3:19 AM, Thierry Carrez  wrote:
> == Need for a TC meeting next Tuesday ==
[...]
> others). Who is up for discussing those items at our usual meeting slot
> time on Tuesday ?

I am unlikely to make the meeting, travel plans are more fluid than I
would like today.  Will be there if possible.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TR: [tricircle]

2017-07-11 Thread Morales, Victor
Hi Meher,

I don’t think that you need to create those folders; at least, that isn’t shown 
in the devstack functions [1].

Regards/Saludos
Victor Morales

[1] https://github.com/openstack-dev/devstack/blob/master/lib/apache#L178-L192

From: "meher.h...@orange.com" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, July 11, 2017 at 7:51 AM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] TR: [tricircle]



[Logo Orange]

Meher Hihi
Intern
ORANGE/IMT/OLN/WTC/CMA/MAX
Fixe : +33 2 96 07 03 71
Mobile : +33 7 58 38 68 87
meher.h...@orange.com


De : HIHI Meher IMT/OLN
Envoyé : mardi 11 juillet 2017 14:50
À : HIHI Meher IMT/OLN
Objet : RE: [openstack-dev][tricircle]

Hi Zhiyuan,

Thank you for the response! So, in this case, I just need to create two 
"sites-available" and "sites-enabled" folders under /etc/ httpd and put in the 
config files found in /etc/httpd/conf.d/?

Regards,

Meher

[Logo Orange]

Meher Hihi
Intern
ORANGE/IMT/OLN/WTC/CMA/MAX
Fixe : +33 2 96 07 03 71
Mobile : +33 7 58 38 68 87
meher.h...@orange.com


De : HIHI Meher IMT/OLN
Envoyé : lundi 10 juillet 2017 16:10
À : 'openstack-dev@lists.openstack.org'
Objet : RE: [openstack-dev][tricircle]

Hello everybody,

I posted earlier about a problem related to installing the tricircle on a single node: 
the script stopped on a keystone startup error. You advised me to check the 
/etc/apache2/sites-enabled folder to see if the keystone config files are 
included. But I have not found this folder, even though the httpd service is properly 
installed. Does the name of this folder change according to the distribution? I use 
RHEL 7, thank you in advance!

Meher

[Logo Orange]

Meher Hihi
Intern
ORANGE/IMT/OLN/WTC/CMA/MAX
Fixe : +33 2 96 07 03 71
Mobile : +33 7 58 38 68 87
meher.h...@orange.com


De : HIHI Meher IMT/OLN
Envoyé : mercredi 28 juin 2017 15:12
À : 'openstack-dev@lists.openstack.org'
Objet : [openstack-dev][tricircle]

Hello everyone,

Let me introduce myself: Meher Hihi. I am doing my internship at Orange Labs 
Networks Lannion-France for a diploma as a computer network and 
telecommunications engineer.

I am working on innovative distribution solutions for the virtualization 
infrastructure of network functions, and more specifically on the OpenStack 
Tricircle solution, which is why I am joining your community to participate in your 
discussions and learn from your advice.

Indeed, I am trying to install Tricircle on a single node by following this 
documentation: 
“https://docs.openstack.org/developer/tricircle/installation-guide.html#single-pod-installation-with-devstack”.
I managed to install Devstack without any problems, but when I modify the 
local.conf file by adding the Tricircle plugin integration and the HOST_IP, the 
script does not work and stops on an error starting the Keystone 
service.

I wanted to know whether the problem is with my attached config file or whether I 
am missing other configuration. You will also find in the file the IP address 
of the machine.

I thank you in advance for the help you will bring me. Sincerely,

Best regards,

Meher

[Logo Orange]

Meher Hihi
Intern
ORANGE/IMT/OLN/WNI/ODIS/NAVI
Fixe : +33 2 96 07 03 71
Mobile : +33 7 58 38 68 87

Re: [openstack-dev] [heat][ironic][telemetry][dragonflow][freezer][kuryr][manila][mistral][monasca][neutron][ansible][congress][rally][senlin][storlets][zun][docs] repos without signs of migration sta

2017-07-11 Thread Doug Hellmann
Excerpts from Akihiro Motoki's message of 2017-07-11 23:56:30 +0900:
> Thanks for updating the status!
> 
> > openstack/networking-midonet
> 
> The doc-migration work for networking-midonet is almost complete.
> Unfortunately they do not use the 'doc-migration' topic, as it seems they
> had already started
> the document overhaul before the doc-migration effort began.
> https://review.openstack.org/#/q/topic:bug/1692788
> Hopefully the midonet folks will use the 'doc-migration' tag for at least one patch.

OK, that's good to know.

> 
> > openstack/networking-vsphere
> 
> This is not a project under the TC governance.

I have removed it from the list.

Thanks,
Doug

> 
> Akihiro
> 
> 2017-07-11 2:26 GMT+09:00 Doug Hellmann :
> > According to the dashboard, it looks like we still have almost 100
> > repositories with documentation that have no patches with the
> > doc-migration topic, indicating that they have not started moving
> > content or updating the theme. I have tried to tag those teams in the
> > subject, but I may have missed some. Please check the list below for a
> > repo owned by your team.
> >
> > If you have completed the work and the dashboard script didn't pick
> > it up, please let me know so I can fix up the data.
> >
> > Doug
> >
> > openstack-dev/heat-cfnclient
> > openstack/bifrost
> > openstack/ceilometer-powervm
> > openstack/ceilometermiddleware
> > openstack/diskimage-builder
> > openstack/dragonflow
> > openstack/freezer-api
> > openstack/freezer-dr
> > openstack/freezer-web-ui
> > openstack/fuxi
> > openstack/fuxi-kubernetes
> > openstack/heat
> > openstack/heat-cfntools
> > openstack/heat-translator
> > openstack/instack
> > openstack/ironic-lib
> > openstack/karbor-dashboard
> > openstack/kolla-kubernetes
> > openstack/manila
> > openstack/manila-image-elements
> > openstack/manila-ui
> > openstack/mistral-dashboard
> > openstack/mistral-extra
> > openstack/mistral-lib
> > openstack/molteniron
> > openstack/monasca-statsd
> > openstack/monasca-transform
> > openstack/networking-hyperv
> > openstack/networking-midonet
> > openstack/networking-vsphere
> > openstack/neutron-lbaas
> > openstack/neutron-lbaas-dashboard
> > openstack/octavia-dashboard
> > openstack/openstack-ansible-apt_package_pinning
> > openstack/openstack-ansible-ceph_client
> > openstack/openstack-ansible-galera_client
> > openstack/openstack-ansible-galera_server
> > openstack/openstack-ansible-haproxy_server
> > openstack/openstack-ansible-lxc_container_create
> > openstack/openstack-ansible-lxc_hosts
> > openstack/openstack-ansible-memcached_server
> > openstack/openstack-ansible-openstack_hosts
> > openstack/openstack-ansible-openstack_openrc
> > openstack/openstack-ansible-os_aodh
> > openstack/openstack-ansible-os_barbican
> > openstack/openstack-ansible-os_ceilometer
> > openstack/openstack-ansible-os_cinder
> > openstack/openstack-ansible-os_designate
> > openstack/openstack-ansible-os_glance
> > openstack/openstack-ansible-os_gnocchi
> > openstack/openstack-ansible-os_heat
> > openstack/openstack-ansible-os_horizon
> > openstack/openstack-ansible-os_ironic
> > openstack/openstack-ansible-os_keystone
> > openstack/openstack-ansible-os_magnum
> > openstack/openstack-ansible-os_molteniron
> > openstack/openstack-ansible-os_neutron
> > openstack/openstack-ansible-os_nova
> > openstack/openstack-ansible-os_octavia
> > openstack/openstack-ansible-os_rally
> > openstack/openstack-ansible-os_sahara
> > openstack/openstack-ansible-os_swift
> > openstack/openstack-ansible-os_tempest
> > openstack/openstack-ansible-os_trove
> > openstack/openstack-ansible-pip_install
> > openstack/openstack-ansible-plugins
> > openstack/openstack-ansible-rabbitmq_server
> > openstack/openstack-ansible-repo_build
> > openstack/openstack-ansible-repo_server
> > openstack/openstack-ansible-rsyslog_client
> > openstack/openstack-ansible-rsyslog_server
> > openstack/openstack-ansible-security
> > openstack/os-net-config
> > openstack/os-win
> > openstack/osc-placement
> > openstack/oslosphinx
> > openstack/pycadf
> > openstack/python-congressclient
> > openstack/python-ironic-inspector-client
> > openstack/python-manilaclient
> > openstack/python-octaviaclient
> > openstack/python-saharaclient
> > openstack/python-tricircleclient
> > openstack/python-tripleoclient
> > openstack/python-vitrageclient
> > openstack/python-zaqarclient
> > openstack/python-zunclient
> > openstack/rally
> > openstack/searchlight-ui
> > openstack/senlin
> > openstack/storlets
> > openstack/sushy
> > openstack/sushy-tools
> > openstack/tosca-parser
> > openstack/virtualbmc
> > openstack/watcher-dashboard
> > openstack/yaql
> > openstack/zun
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


Re: [openstack-dev] [heat][ironic][telemetry][dragonflow][freezer][kuryr][manila][mistral][monasca][neutron][ansible][congress][rally][senlin][storlets][zun][docs] repos without signs of migration sta

2017-07-11 Thread Jeremy Stanley
On 2017-07-11 23:56:30 +0900 (+0900), Akihiro Motoki wrote:
> 2017-07-11 2:26 GMT+09:00 Doug Hellmann :
[...]
> > openstack/networking-vsphere
[...]
> This is not a project under the TC governance.

And as such, also seems to correctly not be publishing any
documentation on docs.openstack.org so should be unaffected.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] We still have a not identical HEAD response

2017-07-11 Thread Andrea Frittoli
On Tue, Jul 11, 2017 at 12:06 PM Attila Fazekas  wrote:

> Hi all,
>
> A long time ago it was discussed making the keystone HEAD responses
>  right [1], as the RFC [2][3] recommends:
>
> "  A response to the HEAD method is identical to what an equivalent
>request made with a GET would have been, except it lacks a body. "
>
> So, the status code needs to be identical as well !
>
> Recently it turned out that keystone is still not correct in all cases [4].
>
> 'Get role inference rule' (GET), 'Confirm role inference rule' (HEAD)
>  have the same URL pattern, but they differ in the status code (200/204),
>  which is not allowed! [5]
>
> This is the only documented case where both HEAD and GET are defined and
> the HEAD has a 204 response.
>
> Are you going to fix this [4] as it was fixed before [6] ?
>
> Best Regards,
> Attila
>
> PS.:
>  Here is the tempest change for accepting the right code [7].
>

On the Tempest side, adding a new accepted code into the test would open the
door to an API backward-incompatible change on the keystone side.
It should be possible for tests (as for a real application) to discover
what kind of behaviour is available on the server side, the 204 or the 200
response.
Otherwise any existing code that expects a 204 response will cease to work
against clouds running a new version of keystone with the code change in.
If keystone used a microversion for this, Tempest would cap the
existing test to a max microversion and develop a new test for the new
behaviour.
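
As a minimal illustration of the mismatch (the role IDs, token and host
below are placeholders, not a real deployment):

import requests

# Hypothetical endpoint following the role inference URL pattern.
url = ("http://keystone.example.com/v3/roles/PRIOR_ROLE_ID"
       "/implies/IMPLIED_ROLE_ID")
headers = {"X-Auth-Token": "TOKEN"}  # placeholder token

get_status = requests.get(url, headers=headers).status_code    # 200 today
head_status = requests.head(url, headers=headers).status_code  # 204 today

# Per RFC 7231 a HEAD response should be identical to the GET response
# minus the body, so these two status codes ought to be equal.
print(get_status, head_status)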

andrea


>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-July/039140.html
> [2] https://tools.ietf.org/html/rfc7231#section-4.3.2
> [3] https://tools.ietf.org/html/rfc7234#section-4.3.5
> [4] https://bugs.launchpad.net/keystone/+bug/1701541
> [5]
> https://developer.openstack.org/api-ref/identity/v3/?expanded=confirm-role-inference-rule-detail,get-role-inference-rule-detail
> [6] https://bugs.launchpad.net/keystone/+bug/1334368
> [7] https://review.openstack.org/#/c/479286/
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Migration from Neutron ML2OVS to OVN

2017-07-11 Thread Numan Siddique
Hello Tripleo team,

I have a few questions regarding migration from neutron ML2OVS to OVN. Below
are some of the requirements:

 - We want to migrate an existing deployment from Neutron's default ML2OVS to
OVN
 - We are targeting this for the tripleo Queens release.
 - The plan is to first upgrade the tripleo deployment from Pike to Queens
with no changes to neutron, i.e. with neutron ML2OVS. Once the upgrade is
done, we want to migrate to OVN.
 - The migration process will stop all the neutron agents, configure
neutron server to load the OVN mechanism driver, and start OVN services (with no
or very limited datapath downtime).
 - The migration would be handled by an ansible script. We have a PoC
ansible script which can be found here [1]

And the questions are:
- (A broad question) What is the right way to migrate and switch the
neutron plugin? Can the stack upgrade handle the migration as well?
- Should the migration procedure be part of tripleo, or can it be a
standalone ansible script? (I presume it should be the former.)
- If it should be part of tripleo, then what would be the command to do
it? An update stack command with appropriate environment files for OVN?
- In case the migration can be done as a standalone script, how to handle
later updates/upgrades, since tripleo wouldn't be aware of the migration?


Please provide your comments so that we can move in the right direction.

[1] - https://github.com/openstack/networking-ovn/tree/master/migration

Thanks
Numan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] office hours reminder

2017-07-11 Thread Lance Bragstad
Hey all,

Just a quick reminder that today we will be holding office hours after
the keystone meeting [0]. See you there!

Thanks,

Lance

[0] http://eavesdrop.openstack.org/#Keystone_Team_Meeting




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][ironic][telemetry][dragonflow][freezer][kuryr][manila][mistral][monasca][neutron][ansible][congress][rally][senlin][storlets][zun][docs] repos without signs of migration sta

2017-07-11 Thread Akihiro Motoki
Thanks for updating the status!

> openstack/networking-midonet

The doc-migration work for networking-midonet is almost complete.
Unfortunately they do not use the 'doc-migration' topic, as it seems they
had already started
the document overhaul before the doc-migration effort began.
https://review.openstack.org/#/q/topic:bug/1692788
Hopefully the midonet folks will use the 'doc-migration' tag for at least one patch.

> openstack/networking-vsphere

This is not a project under the TC governance.

Akihiro

2017-07-11 2:26 GMT+09:00 Doug Hellmann :
> According to the dashboard, it looks like we still have almost 100
> repositories with documentation that have no patches with the
> doc-migration topic, indicating that they have not started moving
> content or updating the theme. I have tried to tag those teams in the
> subject, but I may have missed some. Please check the list below for a
> repo owned by your team.
>
> If you have completed the work and the dashboard script didn't pick
> it up, please let me know so I can fix up the data.
>
> Doug
>
> openstack-dev/heat-cfnclient
> openstack/bifrost
> openstack/ceilometer-powervm
> openstack/ceilometermiddleware
> openstack/diskimage-builder
> openstack/dragonflow
> openstack/freezer-api
> openstack/freezer-dr
> openstack/freezer-web-ui
> openstack/fuxi
> openstack/fuxi-kubernetes
> openstack/heat
> openstack/heat-cfntools
> openstack/heat-translator
> openstack/instack
> openstack/ironic-lib
> openstack/karbor-dashboard
> openstack/kolla-kubernetes
> openstack/manila
> openstack/manila-image-elements
> openstack/manila-ui
> openstack/mistral-dashboard
> openstack/mistral-extra
> openstack/mistral-lib
> openstack/molteniron
> openstack/monasca-statsd
> openstack/monasca-transform
> openstack/networking-hyperv
> openstack/networking-midonet
> openstack/networking-vsphere
> openstack/neutron-lbaas
> openstack/neutron-lbaas-dashboard
> openstack/octavia-dashboard
> openstack/openstack-ansible-apt_package_pinning
> openstack/openstack-ansible-ceph_client
> openstack/openstack-ansible-galera_client
> openstack/openstack-ansible-galera_server
> openstack/openstack-ansible-haproxy_server
> openstack/openstack-ansible-lxc_container_create
> openstack/openstack-ansible-lxc_hosts
> openstack/openstack-ansible-memcached_server
> openstack/openstack-ansible-openstack_hosts
> openstack/openstack-ansible-openstack_openrc
> openstack/openstack-ansible-os_aodh
> openstack/openstack-ansible-os_barbican
> openstack/openstack-ansible-os_ceilometer
> openstack/openstack-ansible-os_cinder
> openstack/openstack-ansible-os_designate
> openstack/openstack-ansible-os_glance
> openstack/openstack-ansible-os_gnocchi
> openstack/openstack-ansible-os_heat
> openstack/openstack-ansible-os_horizon
> openstack/openstack-ansible-os_ironic
> openstack/openstack-ansible-os_keystone
> openstack/openstack-ansible-os_magnum
> openstack/openstack-ansible-os_molteniron
> openstack/openstack-ansible-os_neutron
> openstack/openstack-ansible-os_nova
> openstack/openstack-ansible-os_octavia
> openstack/openstack-ansible-os_rally
> openstack/openstack-ansible-os_sahara
> openstack/openstack-ansible-os_swift
> openstack/openstack-ansible-os_tempest
> openstack/openstack-ansible-os_trove
> openstack/openstack-ansible-pip_install
> openstack/openstack-ansible-plugins
> openstack/openstack-ansible-rabbitmq_server
> openstack/openstack-ansible-repo_build
> openstack/openstack-ansible-repo_server
> openstack/openstack-ansible-rsyslog_client
> openstack/openstack-ansible-rsyslog_server
> openstack/openstack-ansible-security
> openstack/os-net-config
> openstack/os-win
> openstack/osc-placement
> openstack/oslosphinx
> openstack/pycadf
> openstack/python-congressclient
> openstack/python-ironic-inspector-client
> openstack/python-manilaclient
> openstack/python-octaviaclient
> openstack/python-saharaclient
> openstack/python-tricircleclient
> openstack/python-tripleoclient
> openstack/python-vitrageclient
> openstack/python-zaqarclient
> openstack/python-zunclient
> openstack/rally
> openstack/searchlight-ui
> openstack/senlin
> openstack/storlets
> openstack/sushy
> openstack/sushy-tools
> openstack/tosca-parser
> openstack/virtualbmc
> openstack/watcher-dashboard
> openstack/yaql
> openstack/zun
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][ironic][telemetry][dragonflow][freezer][kuryr][manila][mistral][monasca][neutron][ansible][congress][rally][senlin][storlets][zun][docs] repos without signs of migration sta

2017-07-11 Thread Doug Hellmann
Excerpts from gordon chung's message of 2017-07-11 14:23:41 +:
> 
> On 10/07/17 01:26 PM, Doug Hellmann wrote:
> > openstack/ceilometer-powervm
> > openstack/ceilometermiddleware
> 
> i don't believe there are docs for these. ceilometermiddleware is a 
simple wsgi middleware and its usage is part of ceilometer's install 
> docs. ceilometer-powervm contains the powervm driver for ceilometer's 
> polling agent.

I've removed them from the tracking list for now, but it seems like
both are likely to have contributor documentation, at least, and
the driver would likely have installation and configuration docs,
right?

> 
> i missed this but how do we handle smaller add-on type repos like this? 
> i imagine we want to keep docs grouped by project so they are not 
> scattered across the same level.
> 
> cheers,
> 

The new "rule" for docs is "The documentation for something should
live in the same repository as the code."

Hyperlinks are easy and free, so we can use those to ensure that the
results are easy to find. We're building up a nice set of landing pages
for "all of the admin guides" and "all of the configuration references"
and so on within the openstack-manuals repository.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] failed to download image from TryStack

2017-07-11 Thread Jeremy Stanley
On 2017-07-11 17:24:55 +0800 (+0800), Jeffrey Zhang wrote:
[...]
> first of all, this site is not trystack, it is tarballs.openstack.org.
> 
> I asked the openstack infra team and got the following feedback:
> 
> > this was disabled yesterday since the images produced massive
> > load. fungi started putting up a caching proxy for these.
> 
> This is disabled now and will be re-enabled in the future.

Correct, the recent increase in CI jobs repeatedly retrieving 2-4GiB
files from the tarballs site resulted in a denial of service for all
other sites we're also serving from that system (governance,
releases, security, specs, service-types, trystack, logs,
docs-drafts). We're nearly finished with a series of patches and
manual adjustments to our CI mirror and proxy network to allow files
from the tarballs site to be retrieved through caching proxies local
to nodes running jobs, thereby distributing the load from these
large artifacts much more broadly.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-11 Thread Waines, Greg
Hey Hongbin,

is there a way to display ZUN’s resource usage ?
i.e. analogous to nova’s “nova hypervisor-show ”
e.g. memory usage, CPU usage, etc.

Greg.


From: Hongbin Lu 
Date: Friday, July 7, 2017 at 2:08 PM
To: Greg Waines , 
"openstack-dev@lists.openstack.org" 
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Zun currently leverages the “--memory", “--cpu-period”, and “--cpu-quota” 
options to limit the CPU and memory. Zun does do resource tracking and 
scheduling right now, but this is temporary. The long-term plan is to switch to 
the Placement API [1] after it is split out from Nova.

[1] https://docs.openstack.org/nova/latest/placement.html

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 11:00 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin.

I’ve got zun setup in devstack now, so will play with it a bit to better 
understand.

Although a couple more questions (sorry)

• in the current zun implementation of containers directly on compute 
nodes,
does zun leverage any of the docker capabilities to restrict the amount of 
resources used by a container ?
e.g. the amount and which cpu cores the container’s processes are allowed to 
use,
 how much memory the container is allowed to access/use, etc.

e.g. see https://docs.docker.com/engine/admin/resource_constraints/

• and then,
I know you mentioned that the assumption is that there are separate 
availability zones for zun and nova.

o but does zun do Resource Tracking and Scheduling based on that Resource 
Tracking for the nodes it's using ?

Greg.


From: Hongbin Lu >
Date: Friday, July 7, 2017 at 10:42 AM
To: Greg Waines >, 
"openstack-dev@lists.openstack.org" 
>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Sorry for the confusion. I used the term “container-in-baremetal” to refer to a 
deployment pattern in which containers run on physical compute nodes (not 
an instance provided by Nova/Ironic). I think your second interpretation is 
right if “OpenStack Hosts” means a compute node. I think a diagram [1] could 
explain the current deployment scenario better.

For the container-in-coe scenario, it is out of the current focus but the team 
is exploring ideas on it. I don’t have specific answers for the two questions 
you raised but I encourage you to bring up your use cases to the team and keep 
the discussion open.

[1] https://www.slideshare.net/hongbin034/clipboards/zun-deployment

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 7:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hongbin,
Thanks for the responses.
A couple of follow up, clarifying questions ...


• You mentioned that currently Zun supports the container-in-baremetal 
scenario

o is this done by leveraging the Ironic baremetal service ?

•  e.g. does Zun launch an Ironic baremetal instance (running docker) in order 
to host containers being launched by Zun ?

o OR

o do you mean that, in this scenario, OpenStack Hosts are 
deployed with docker software,
and Zun expects docker to be running on each OpenStack Host, in order to launch 
its containers ?



• In the future, when Zun supports the container-in-coe scenario

o is the idea that the COE (Kubernetes or Swarm) will abstract from Zun 
whether the COE’s minion nodes are OpenStack VMs or OpenStack Baremetal 
Instances (or OpenStack Hosts) ?

o is the idea that Magnum will support launching COEs with VM minion nodes 
and/or Baremetal minion nodes ?


Greg.


From: Hongbin Lu >
Reply-To: 
"openstack-dev@lists.openstack.org" 
>
Date: Thursday, July 6, 2017 at 2:39 PM
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Please find my replies inline.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-06-17 11:49 AM
To: openstack-dev@lists.openstack.org
Subject: 

Re: [openstack-dev] [heat][ironic][telemetry][dragonflow][freezer][kuryr][manila][mistral][monasca][neutron][ansible][congress][rally][senlin][storlets][zun][docs] repos without signs of migration sta

2017-07-11 Thread Doug Hellmann
Excerpts from Renat Akhmerov's message of 2017-07-11 12:08:23 +0700:
> On 11 Jul 2017, 00:27 +0700, wrote:
> 
> > openstack/mistral-dashboard
> > openstack/mistral-extra
> 
> These two are not supposed to have docs at all. We should probably just 
> remove the “doc” folder and corresponding CI jobs.

OK, I removed them from the list of repos we are tracking.

> 
> >  openstack/mistral-lib
> 
> This should be taken care of soon.

Thanks!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][ironic][telemetry][dragonflow][freezer][kuryr][manila][mistral][monasca][neutron][ansible][congress][rally][senlin][storlets][zun][docs] repos without signs of migration sta

2017-07-11 Thread gordon chung


On 10/07/17 01:26 PM, Doug Hellmann wrote:
> openstack/ceilometer-powervm
> openstack/ceilometermiddleware

i don't believe there are docs for these. ceilometermiddleware is a 
simple wsgi middleware and its usage is part of ceilometer's install 
docs. ceilometer-powervm contains the powervm driver for ceilometer's 
polling agent.

i missed this but how do we handle smaller add-on type repos like this? 
i imagine we want to keep docs grouped by project so they are not 
scattered across the same level.

cheers,

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC][Interop] Application for inclusion of Glare in the list of official projects

2017-07-11 Thread Jeremy Stanley
On 2017-07-11 13:47:41 +0200 (+0200), Flavio Percoco wrote:
[...]
> Anything that delays the transition to the new thing is providing
> a fake bridge to the users. It's a bridge that will be blown-up
> eventually.
> 
> To make a hypothetical transition from Glance to Glare work
> smoothly, we should first figure out how to migrate the database
> (assuming this has not been done yet), how to migrate the images,
> etc. Only when these things have been figured out, I'd start
> worrying about what compatibility layer we want to provide. The
> answer could also be: "Hey, we're sorry but, the best thing you
> can do is to migrate your code base as soon as possible".
[...]

There's also this challenge which we'd need to deal with eventually,
and which would require substantial transition planning and
messaging:

http://git.openstack.org/cgit/openstack/interop/tree/2017.01.json?id=b397863#n3705

http://git.openstack.org/cgit/openstack/interop/tree/doc/source/process/CoreCriteria.rst?id=d8fb682#n62

http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/tc_approved-release.rst?id=a2b9702#n82

If we decide to go forward with such an action, it will be a _very_
lengthy process and one we shouldn't enter into without a
significant amount of certainty that we can see it through to
completion.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][UI][UX] Improving the Tag Nodes and Assign Nodes to Roles workflow

2017-07-11 Thread Liz Blanchard
Hi All,

I'm trying out a new way of sharing design proposals based on user feedback
and I've put together a presentation showing an area of improvement in the
TripleO UI along with my suggestions moving forward.

I'd love to have a discussion on this and continue to make the proposal
even better before getting into the development phase.

https://docs.google.com/presentation/d/140zInQ5rZoQdTDQVXVfMuZQnQSt6DnW8NS9Pugmg_Sc/edit?usp=sharing

Please feel free to comment directly on the presentation, via this email,
or if folks would like to have a meeting to discuss, I'm happy to set that
up.

Thanks!
Liz
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] We still have a not identical HEAD response

2017-07-11 Thread Lance Bragstad
Based on the comments and opinions in the original thread, I think a fix
for this is justified. I wouldn't mind running this by the TC to double
check that nothing has changed from the first time we had to fix this
issue though.


On 07/11/2017 06:03 AM, Attila Fazekas wrote:
> Hi all,
>
> A long time ago it was discussed making the keystone HEAD responses
>  right [1], as the RFC [2][3] recommends:
>
> "  A response to the HEAD method is identical to what an equivalent
>request made with a GET would have been, except it lacks a body. "
>
> So, the status code needs to be identical as well !
>
> Recently it turned out that keystone is still not correct in all cases [4].
>
> 'Get role inference rule' (GET), 'Confirm role inference rule' (HEAD)
>  have the same URL pattern, but they differ in the status code (200/204),
>  which is not allowed! [5]
>
> This is the only documented case where both HEAD and GET are defined and
> the HEAD has a 204 response.
>
> Are you going to fix this [4] as it was fixed before [6] ?
>
> Best Regards,
> Attila
>
> PS.:
>  Here is the tempest change for accepting the right code [7].
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-July/039140.html
> [2] https://tools.ietf.org/html/rfc7231#section-4.3.2
> [3] https://tools.ietf.org/html/rfc7234#section-4.3.5
> [4] https://bugs.launchpad.net/keystone/+bug/1701541
> [5]
> https://developer.openstack.org/api-ref/identity/v3/?expanded=confirm-role-inference-rule-detail,get-role-inference-rule-detail
> [6] https://bugs.launchpad.net/keystone/+bug/1334368
> [7] https://review.openstack.org/#/c/479286/
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TR: [tricircle]

2017-07-11 Thread meher.hihi


[Logo Orange]

Meher Hihi
Intern
ORANGE/IMT/OLN/WTC/CMA/MAX
Fixe : +33 2 96 07 03 71
Mobile : +33 7 58 38 68 87
meher.h...@orange.com


De : HIHI Meher IMT/OLN
Envoyé : mardi 11 juillet 2017 14:50
À : HIHI Meher IMT/OLN
Objet : RE: [openstack-dev][tricircle]

Hi Zhiyuan,

Thank you for the response! So, in this case, I just need to create two 
"sites-available" and "sites-enabled" folders under /etc/ httpd and put in the 
config files found in /etc/httpd/conf.d/?

Regards,

Meher

[Logo Orange]

Meher Hihi
Intern
ORANGE/IMT/OLN/WTC/CMA/MAX
Fixe : +33 2 96 07 03 71
Mobile : +33 7 58 38 68 87
meher.h...@orange.com


De : HIHI Meher IMT/OLN
Envoyé : lundi 10 juillet 2017 16:10
À : 'openstack-dev@lists.openstack.org'
Objet : RE: [openstack-dev][tricircle]

Hello everybody,

I posted earlier about a problem related to installing the tricircle on a single node: 
the script stopped on a keystone startup error. You advised me to check the 
/etc/apache2/sites-enabled folder to see if the keystone config files are 
included. But I have not found this folder, even though the httpd service is properly 
installed. Does the name of this folder change according to the distribution? I use 
RHEL 7, thank you in advance!

Meher

[Logo Orange]

Meher Hihi
Intern
ORANGE/IMT/OLN/WTC/CMA/MAX
Fixe : +33 2 96 07 03 71
Mobile : +33 7 58 38 68 87
meher.h...@orange.com


De : HIHI Meher IMT/OLN
Envoyé : mercredi 28 juin 2017 15:12
À : 'openstack-dev@lists.openstack.org'
Objet : [openstack-dev][tricircle]

Hello everyone,

Let me introduce myself: Meher Hihi. I am doing my internship at Orange Labs 
Networks Lannion-France for a diploma as a computer network and 
telecommunications engineer.

I am working on innovative distribution solutions for the virtualization 
infrastructure of network functions, and more specifically on the OpenStack 
Tricircle solution, which is why I am joining your community to participate in your 
discussions and learn from your advice.

Indeed, I am trying to install Tricircle on a single node by following this 
documentation: 
“https://docs.openstack.org/developer/tricircle/installation-guide.html#single-pod-installation-with-devstack”.
I managed to install Devstack without any problems, but when I modify the 
local.conf file by adding the Tricircle plugin integration and the HOST_IP, the 
script does not work and stops on an error starting the Keystone 
service.

I wanted to know whether the problem is with my attached config file or whether I 
am missing other configuration. You will also find in the file the IP address 
of the machine.

I thank you in advance for the help you will bring me. Sincerely,

Best regards,

Meher

[Logo Orange]

Meher Hihi
Intern
ORANGE/IMT/OLN/WNI/ODIS/NAVI
Fixe : +33 2 96 07 03 71
Mobile : +33 7 58 38 68 87
meher.h...@orange.com



_

This message and its attachments may contain confidential or privileged 
information and therefore must not
be distributed, used or copied without authorization. If you have received this 
message by mistake, please notify
the sender and delete it along with the attachments. As electronic messages 
are susceptible 

Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-11 Thread Melvin Hillsman
++ As I was not sure how to word it without sounding too opinionated or
without appropriate technical jargon. When most folks hear "mostly the
same" regarding a critical component, and sometimes not so critical ones,
that raises all kinds of red flags. I could not think, from a purely code
aspect, of what that means, but from an operations aspect it means it could
potentially affect the bottom line, and well, that affects everyone generally :)

On Tue, Jul 11, 2017 at 6:45 AM, Davanum Srinivas  wrote:

> On Tue, Jul 11, 2017 at 7:33 AM, Chris Dent 
> wrote:
> > On Tue, 11 Jul 2017, Mikhail Fedosin wrote:
> >
> >> For example, deactivating an image in Glance looks like *POST*
> >> /v2/images/{image_id}/actions/deactivate with empty body.
> >> At one time, Chris Dent advised us to avoid such decisions, and simply
> >> change the status of the artifact to 'deactivated' using *PATCH*, which
> we
> >> did.
> >
> >
> > Indeed I did. The point of that was to avoid "actions" style URLs on
> > resources that already have that information in their
> > representations so that the interface is more RESTful and doesn't
> > have a profusion of verby URLs. The other option is to PUT a full
> > representation with the status changed.
> >
> > But that's not the point here. The issue is that in order for Glare
> > to provide a seamless compatibility layer with Glance it needs to be
> > able to present a facade which is _identical_ to Glance. Not mostly
> > the same but with improvement, but identical with all the same
> > warts.
>
> Big +1 to "Not mostly the same but with improvement, but identical
> with all the same warts.". Anything else is a deal breaker IMHO.
>
> Thanks,
> Dims
>
> >
> > This provides a critical part in a smooth migration plan. As people
> > become aware of glare being there, they can start taking advantage
> > of the new features in their new code or code that they are ready to
> > update, without having to update old stuff.
> >
> > If Glare has fairly good separation between the code that handles
> > URLs and processes bodies (in and out) and the code that does stuff
> > with those bodies[1], it ought to be somewhat straightforward to
> > create such a facade.
> >
> > [1] Not gonna use model, view, controller here; those terms have
> > never been accurate for web-based APIs.
> >
> >
> >
> > --
> > Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
> > freenode: cdent tw: @anticdent
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Kind regards,

Melvin Hillsman
mrhills...@gmail.com
mobile: (832) 264-2646

Learner | Ideation | Belief | Responsibility | Command
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-11 Thread Flavio Percoco

On 11/07/17 14:20 +0300, Mikhail Fedosin wrote:

On Tue, Jul 11, 2017 at 1:43 AM, Monty Taylor  wrote:


On 07/10/2017 04:31 PM, Mikhail Fedosin wrote:

Third, all these changes can be hidden in the Glare client. So if we try a
little, we can achieve 100% compatibility there, and other projects can use
the Glare client instead of Glance's without even noticing the differences.



I think we should definitely not do this... I think instead, if we decide
to go down this road, we want to look at adding an endpoint to glare that
speaks glance v2 API so that users can have a transition period while
libraries and tools get updated to understand the artifacts API.



This is optional and depends on the project developers. For my part, I can
only offer the most compatible client, so that the Glance module can be
simply copied into the new Glare module.


Unfortunately, adding this sort of logic to the client is almost never the right
choice. To be completely honest, I'm not even convinced having a Glance-like API
in Glare is the right thing to do. As soon as that API hits the codebase, you'll
have to maintain it.

Anything that delays the transition to the new thing is providing a fake bridge
to the users. It's a bridge that will be blown-up eventually.

To make a hypothetical transition from Glance to Glare work smoothly, we should
first figure out how to migrate the database (assuming this has not been done
yet), how to migrate the images, etc. Only when these things have been figured
out, I'd start worrying about what compatibility layer we want to provide. The
answer could also be: "Hey, we're sorry but, the best thing you can do is to
migrate your code base as soon as possible".


If projects use Glance without a client, it means that some direct API

requests will need to be rewritten. But in any case, the number of
differences between Glance v1 and Glance v2 was much larger, and we
switched pretty smoothly. So I hope everything will be fine here, too.



v1 vs v2 is still a major headache for end users. I don't think it's ok
for us to do that to our users again if we can help it.

However, as you said, conceptually the calls are very similar so making an
API controller that can be registered in the catalog as "image" should be
fairly easy to do, no?


Indeed, the interfaces are almost identical. And all the differences were
made on purpose.

For example, deactivating an image in Glance looks like *POST*
/v2/images/{image_id}/actions/deactivate with empty body.
At one time, Chris Dent advised us to avoid such decisions, and simply
change the status of the artifact to 'deactivated' using *PATCH*, which we
did.


Despite this not being my preferred option, I definitely prefer it over the
"compatible" client library.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-11 Thread Davanum Srinivas
On Tue, Jul 11, 2017 at 7:33 AM, Chris Dent  wrote:
> On Tue, 11 Jul 2017, Mikhail Fedosin wrote:
>
>> For example, deactivating an image in Glance looks like *POST*
>> /v2/images/{image_id}/actions/deactivate with empty body.
>> At one time, Chris Dent advised us to avoid such decisions, and simply
>> change the status of the artifact to 'deactivated' using *PATCH*, which we
>> did.
>
>
> Indeed I did. The point of that was to avoid "actions" style URLs on
> resources that already have that information in their
> representations so that the interface is more RESTful and doesn't
> have a profusion of verby URLs. The other option is to PUT a full
> representation with the status changed.
>
> But that's not the point here. The issue is that in order for Glare
> to provide a seamless compatibility layer with Glance it needs to be
> able to present a facade which is _identical_ to Glance. Not mostly
> the same but with improvement, but identical with all the same
> warts.

Big +1 to "Not mostly the same but with improvement, but identical
with all the same warts.". Anything else is a deal breaker IMHO.

Thanks,
Dims

>
> This provides a critical part in a smooth migration plan. As people
> become aware of glare being there, they can start taking advantage
> of the new features in their new code or code that they are ready to
> update, without having to update old stuff.
>
> If Glare has fairly good separation between the code that handles
> URLs and processes bodies (in and out) and the code that does stuff
> with those bodies[1], it ought to be somewhat straightforward to
> create such a facade.
>
> [1] Not gonna use model, view, controller here; those terms have
> never been accurate for web-based APIs.
>
>
>
> --
> Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
> freenode: cdent tw: @anticdent
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-11 Thread Chris Dent

On Tue, 11 Jul 2017, Mikhail Fedosin wrote:


For example, deactivating an image in Glance looks like *POST*
/v2/images/{image_id}/actions/deactivate with empty body.
At one time, Chris Dent advised us to avoid such decisions, and simply
change the status of the artifact to 'deactivated' using *PATCH*, which we
did.


Indeed I did. The point of that was to avoid "actions" style URLs on
resources that already have that information in their
representations so that the interface is more RESTful and doesn't
have a profusion of verby URLs. The other option is to PUT a full
representation with the status changed.

But that's not the point here. The issue is that in order for Glare
to provide a seamless compatibility layer with Glance it needs to be
able to present a facade which is _identical_ to Glance. Not mostly
the same but with improvement, but identical with all the same
warts.

This provides a critical part in a smooth migration plan. As people
become aware of glare being there, they can start taking advantage
of the new features in their new code or code that they are ready to
update, without having to update old stuff.

If Glare has fairly good separation between the code that handles
URLs and processes bodies (in and out) and the code that does stuff
with those bodies[1], it ought to be somewhat straightforward to
create such a facade.

[1] Not gonna use model, view, controller here; those terms have
never been accurate for web-based APIs.
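
To make the shape of such a facade concrete, a minimal sketch in Python --
the handler and the glare-side function are invented names for
illustration, not taken from either codebase:

from webob import Response

def glare_patch_artifact(artifact_type, artifact_id, patch):
    """Stand-in for whatever Glare layer actually applies a JSON patch."""
    raise NotImplementedError

# Hypothetical WSGI-level handler: accept the Glance v2 deactivate
# action and translate it into Glare's status PATCH internally.
def glance_v2_deactivate(req, image_id):
    glare_patch_artifact(
        artifact_type="images",
        artifact_id=image_id,
        patch=[{"op": "replace", "path": "/status",
                "value": "deactivated"}])
    # Glance v2 returns 204 No Content for this action.
    return Response(status=204)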


--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuxi] [kuryr] [stackube] [openstack-helm] [kolla-kubernetes] [magnum] [zun] Projects in the intersection of Kubernetes and OpenStack

2017-07-11 Thread Davanum Srinivas
Folks,

Between the projects listed, we seem to be covering a whole lot of use
cases. Are there others that we are missing? Anyone have other ideas
for things we could be doing that needs a home?

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-11 Thread Mikhail Fedosin
On Tue, Jul 11, 2017 at 1:43 AM, Monty Taylor  wrote:

> On 07/10/2017 04:31 PM, Mikhail Fedosin wrote:
>
>> Thank you for asking this! It's really very important and interesting, so
>> I'm going to explain those things in more detail.
>>
>> First, when we designed Glare, we kept in mind the compatibility with
>> Glance, and I can tell that Glance data from the database can be ported to
>> Glare with a simple script without any loss.
>>
>> Second, the APIs are very similar and map 1:1. The only big difference is
>> that the user has to perform activation manually after the image file is uploaded.
>> I created a small table with the most popular API requests. You may notice
>> how similar both APIs are: https://docs.google.com/docume
>> nt/d/18Tqad0NUPyFfHUo1KMr6bDDISpQtzacvZtEQIGhNkf4/edit?usp=sharing
>> Other changes are rather cosmetic. For instance, "queued" image status
>> was renamed to "drafted".
>>
>> Third, all these changes can be hidden in the Glare client. So if we try a
>> little, we can achieve 100% compatibility there, and other projects can use
>> the Glare client instead of Glance's without even noticing the differences.
>>
>
> I think we should definitely not do this... I think instead, if we decide
> to go down this road, we want to look at adding an endpoint to glare that
> speaks glance v2 API so that users can have a transition period while
> libraries and tools get updated to understand the artifacts API.


This is optional and depends on the project developers. For my part, I can
only offer the most compatible client, so that the Glance module can be
simply copied into the new Glare module.


>
> If projects use Glance without a client, it means that some direct API
>> requests will need to be rewritten. But in any case, the number of
>> differences between Glance v1 and Glance v2 was much larger, and we
>> switched pretty smoothly. So I hope everything will be fine here, too.
>>
>
> v1 vs v2 is still a major headache for end users. I don't think it's ok
> for us to do that to our users again if we can help it.
>
> However, as you said, conceptually the calls are very similar so making an
> API controller that can be registered in the catalog as "image" should be
> fairly easy to do, no?
>
Indeed, the interfaces are almost identical. And all the differences were
made on purpose.

For example, deactivating an image in Glance looks like *POST*
/v2/images/{image_id}/actions/deactivate with an empty body.
At one time, Chris Dent advised us to avoid such decisions, and simply
change the status of the artifact to 'deactivated' using *PATCH*, which we
did.
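
To make the contrast concrete, a rough sketch of the two calls (hosts,
ports and the exact patch document are assumptions for illustration):

# Glance v2: state change via an action sub-resource, empty body
curl -X POST "http://controller:9292/v2/images/$IMAGE_ID/actions/deactivate" \
  -H "X-Auth-Token: $TOKEN"

# Glare: the same state change expressed as a plain attribute update
curl -X PATCH "http://controller:9494/artifacts/images/$ARTIFACT_ID" \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json-patch+json" \
  -d '[{"op": "replace", "path": "/status", "value": "deactivated"}]'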

>
> Best,
>> Mike Fedosin
>>
>> On Mon, Jul 10, 2017 at 9:55 PM, Joshua Harlow > > wrote:
>>
>> Ed Leafe wrote:
>>
>> On Jul 10, 2017, at 5:06 AM, Mikhail Fedosin > 
>> >> wrote:
>>
>> Given all the advantages and features of Glare, I believe
>> that it can
>> become the successful drop-in replacement.
>>
>>
>> Can you clarify this? Let’s assume I have a decent-sized
>> deployment
>> running Glance. If I were to remove Glance and replace it with
>> Glare,
>> are you saying that nothing would break? Operators, users,
>> scripts,
>> SDKs, etc., would all work unchanged?
>>
>>
>> Sounds interesting,
>>
>> Is there some kind of glance-compat API?
>>
>>
>> -- Ed Leafe
>>
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > subscribe>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for 

Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-11 Thread Mikhail Fedosin
On Tue, Jul 11, 2017 at 12:31 AM, Monty Taylor  wrote:

> On 07/10/2017 01:21 PM, Ed Leafe wrote:
>
>> On Jul 10, 2017, at 5:06 AM, Mikhail Fedosin  mfedo...@gmail.com>> wrote:
>>
>> Given all the advantages and features of Glare, I believe that it can
>>> become the successful drop-in replacement.
>>>
>>
>> Can you clarify this? Let’s assume I have a decent-sized deployment
>> running Glance. If I were to remove Glance and replace it with Glare, are
>> you saying that nothing would break? Operators, users, scripts, SDKs, etc.,
>> would all work unchanged?
>>
>
> I also have this question. The glance API is one of the most fundamental
> and basic APIs. You pretty much can't do anything useful on a cloud without
> touching it.
>
> That said - it's not like glare couldn't also do those things - but I'd
> need to understand some real specifics about what a cloud switching from
> glance to glare looks like to the end user.
>
> Also, we have a new upload API designed for glance that took a LARGE
> amount of wrangling to get consensus on. I'd also want to know what this
> situation looks like in glare, if image upload in glare supports all of the
> use-cases that we figured out image upload in glance needed to support. AND
> - there are folks who want import-from which was removed between glance v1
> and v2. Does glare support something in this area?


I think you're asking about "copy-from". If so, yes - it's planned to be
implemented this week.


>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-11 Thread Mikhail Fedosin
On Tue, Jul 11, 2017 at 12:54 AM, Monty Taylor  wrote:

> On 07/10/2017 04:31 PM, Monty Taylor wrote:
>
>> On 07/10/2017 01:21 PM, Ed Leafe wrote:
>>
>>> On Jul 10, 2017, at 5:06 AM, Mikhail Fedosin >> > wrote:
>>>
>>> Given all the advantages and features of Glare, I believe that it can
 become the successful drop-in replacement.

>>>
>>> Can you clarify this? Let’s assume I have a decent-sized deployment
>>> running Glance. If I were to remove Glance and replace it with Glare, are
>>> you saying that nothing would break? Operators, users, scripts, SDKs, etc.,
>>> would all work unchanged?
>>>
>>
>> I also have this question. The glance API is one of the most fundamental
>> and basic APIs. You pretty much can't do anything useful on a cloud without
>> touching it.
>>
>
> Actually - as an easy first-step - set up a gate job with a devstack that
> has glare and no glance and run shade's functional tests against it. We're
> pretty darned lenient - if you can pass our functional tests then talking
> about stricter things like tempest is worthwhile. If you can't - hopefully
> there will be some clear areas to work on.

Yes again - creating a set of tempest gates in Jenkins is our highest
priority. If everything is fine, I'll propose the related patches today.
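
For anyone wanting to try that glance-less devstack locally, a starting
point might look like this in local.conf (the plugin line and service names
are assumptions to verify, not a tested recipe):

[[local|localrc]]
disable_service g-api g-reg
enable_plugin glare https://git.openstack.org/openstack/glare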


>
>
> That said - it's not like glare couldn't also do those things - but I'd
>> need to understand some real specifics about what a cloud switching from
>> glance to glare looks like to the end user.
>>
>> Also, we have a new upload API designed for glance that took a LARGE
>> amount of wrangling to get consensus on. I'd also want to know what this
>> situation looks like in glare, if image upload in glare supports all of the
>> use-cases that we figured out image upload in glance needed to support. AND
>> - there are folks who want import-from which was removed between glance v1
>> and v2. Does glare support something in this area?
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] We still have a not identical HEAD response

2017-07-11 Thread Attila Fazekas
Hi all,

A long time ago it was discussed how to make the keystone HEAD responses
right [1], as the RFC [2][3] recommends:

"  A response to the HEAD method is identical to what an equivalent
   request made with a GET would have been, except it lacks a body. "

So, the status code needs to be identical as well!

Recently it turned out that keystone is still not correct in all cases [4].

'Get role inference rule' (GET) and 'Confirm role inference rule' (HEAD)
have the same URL pattern, but they differ in status code (200 vs. 204),
which is not allowed! [5]

This is the only documented case where both HEAD and GET are defined and
the HEAD has a 204 response.
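
An easy way to see the mismatch (endpoint and role IDs are placeholders):

URL="http://keystone:5000/v3/roles/$PRIOR_ROLE/implies/$IMPLIED_ROLE"
curl -s -o /dev/null -w "%{http_code}\n" -H "X-Auth-Token: $TOKEN" "$URL"
# -> 200 (GET)
curl -s -o /dev/null -w "%{http_code}\n" -I -H "X-Auth-Token: $TOKEN" "$URL"
# -> 204 (HEAD) today, though per the RFC it should also be 200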

Are you going to fix this [4] as it was fixed before [6]?

Best Regards,
Attila

PS.:
 Here is the tempest change for accepting the right code [7].

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/039140.html
[2] https://tools.ietf.org/html/rfc7231#section-4.3.2
[3] https://tools.ietf.org/html/rfc7234#section-4.3.5
[4] https://bugs.launchpad.net/keystone/+bug/1701541
[5]
https://developer.openstack.org/api-ref/identity/v3/?expanded=confirm-role-inference-rule-detail,get-role-inference-rule-detail
[6] https://bugs.launchpad.net/keystone/+bug/1334368
[7] https://review.openstack.org/#/c/479286/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] No CCF meeting

2017-07-11 Thread Duarte Cardoso, Igor
Hi all,

The repository has been updated to pass the gate: 
https://review.openstack.org/#/c/480199/.

There will be no meeting today as we have no outstanding topics to discuss and 
are still working on submitting the first round of code of the Common 
Classification Framework, which is due for pike-3.

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] why common_options type is dictionary ?

2017-07-11 Thread Margin Hu

If I modify kolla_docker.py, how do I debug an Ansible module?

I found
http://docs.ansible.com/ansible/dev_guide/developing_modules.html, but
failed to run it, as shown below.


How do you do testing when developing the module?

[root@server120 ansible-module]# ansible/hacking/test-module -m ./kolla_docker.py
* including generated source, if any, saving to: 
/root/.ansible_module_generated

* ansiballz module detected; extracted module source to: /root/debug_dir
***
RAW OUTPUT

{"msg": "missing required arguments: action", "failed": true, 
"invocation": {"module_args": {"cap_add": [], "graceful_timeout": 10, 
"common_options": {}, "labels": {}, "remove_on_exit": true, 
"tls_verify": false, "restart_retries": 10, "detach": true, 
"security_opt": [], "privileged": false, "api_version": "auto"}}}



***
PARSED OUTPUT
{
"failed": true,
"invocation": {
"module_args": {
"api_version": "auto",
"cap_add": [],
"common_options": {},
"detach": true,
"graceful_timeout": 10,
"labels": {},
"privileged": false,
"remove_on_exit": true,
"restart_retries": 10,
"security_opt": [],
"tls_verify": false
}
},
"msg": "missing required arguments: action"
}
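
That error just means no module arguments were supplied. Passing them with
-a should get past it; for example (the action and image values here are
made up for illustration):

ansible/hacking/test-module -m ./kolla_docker.py \
  -a "action=pull_image image=10.20.11.2:4000/kolla/centos-binary-keystone:4.0.2"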

On 7/11 17:50, Margin Hu wrote:


Ok, I see.

I wanted to add the "--ulimit" parameter when running docker. It seems that
I have to add an extra parameter to the kolla_docker module in
kolla_docker.py.



On 7/11 17:08, Jeffrey Zhang wrote:

There are lots of non-plain variables in kolla, dicts or lists in Ansible.

If you do not want to override the dict, you can add the following to the
globals.yml file.


docker_common_options:
auth_email: "{{ docker_registry_email }}"
auth_password: "{{ docker_registry_password }}"
auth_registry: "{{ docker_registry }}"
auth_username: "{{ docker_registry_username }}"
environment:
  KOLLA_CONFIG_STRATEGY: "{{ config_strategy }}"
  custom_key: custom value
restart_policy: "{{ docker_restart_policy }}"
restart_retries: "{{ docker_restart_policy_retry }}"


On Tue, Jul 11, 2017 at 4:55 PM, Paul Bourke > wrote:


Because it's a series of key-value pairs:

https://github.com/openstack/kolla-ansible/blob/master/ansible/group_vars/all.yml#L96-L105



Is there another type you feel would fit better?


On 11/07/17 05:22, Margin Hu wrote:

Hi Guys:

I want to set the docker_common_options parameter but find its
type is dictionary. Why?

ansible/roles/zun/tasks/pull.yml:5: common_options: "{{
docker_common_options }}"
tests/test_kolla_docker.py:44:
common_options=dict(required=False, type='dict', default=dict()),





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 回复: [devstack] How to start all OpenStack servicesafter restarting system?

2017-07-11 Thread Ricardo Carrillo Cruz
I believe screen is no longer the system to manage services in DevStack,
but systemd is now:

https://docs.openstack.org/devstack/latest/systemd.html

You should just use systemctl to bring up all OpenStack services.
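
For example (devstack names its units with a "devstack@" prefix; adjust to
the services you actually enabled):

sudo systemctl status "devstack@*"               # list the devstack units
sudo systemctl restart "devstack@*"              # (re)start everything
sudo systemctl restart devstack@n-cpu.service    # or just one service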

HTH

2017-07-11 11:40 GMT+02:00 zhi :

> Hi, Abhishek.
>
> I have a question about devstack. The file "stack-screenrc" doesn't exist
> even though I installed devstack successfully. Why? I searched all over the
> devstack directory but I cannot find it. Could you give me some advice?
>
>
> Thanks
> Zhi Chang
>
> 2016-08-22 16:42 GMT+08:00 wk <304702...@qq.com>:
>
>> my scripts, runs ok:
>>
>> /usr/bin/python /usr/bin/glance-registry 
>> --config-file=/etc/glance/glance-registry.conf
>> &> glance-registry.log &
>> /usr/bin/python /usr//bin/glance-api 
>> --config-file=/etc/glance/glance-api.conf
>> &> glance-api.log &
>> /usr/bin/python /usr/bin/nova-conductor &> nova-conductor.log &
>> sg libvirtd /usr/bin/nova-compute --config-file /etc/nova/nova.conf &>
>> nova-compute.log &
>> /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova.conf
>> &> nova-compute.log &
>> /usr/bin/python /usr/bin/nova-cert &> nova-cert.log &
>> /usr/bin/python /usr/bin/nova-network --config-file /etc/nova/nova.conf
>> &> nova-network.log &
>> /usr/bin/python /usr/bin/nova-scheduler --config-file /etc/nova/nova.conf
>> &> nova-scheduler.log &
>> /usr/bin/python /usr/bin/nova-novncproxy --config-file
>> /etc/nova/nova.conf --web /opt/stack/noVNC &> nova-novncproxy.log &
>> /usr/bin/python /usr/bin/nova-xvpvncproxy --config-file
>> /etc/nova/nova.conf &> nova-xvpvncproxy.log &
>> /usr/bin/python /usr/bin/nova-consoleauth &> nova-consoleauth.log &
>> /usr/bin/python /usr/bin/nova-objectstore &> nova-objectstore.log &
>> /usr/bin/python /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf
>> &> cinder-api.log &
>> /usr/bin/python /usr/bin/cinder-scheduler --config-file
>> /etc/cinder/cinder.conf &> cinder-scheduler.log &
>> /usr/bin/python /usr/bin/cinder-volume --config-file
>> /etc/cinder/cinder.conf &> cinder-volume.log &
>> /usr/bin/python /usr/bin/nova-api &> nova-api.log &
>> /usr/bin/python /opt/stack/keystone/bin/keystone-all --config-file
>> /etc/keystone/keystone.conf --log-config /etc/keystone/logging.conf -d
>> --debug &> keystone-all.log &
>> /usr/sbin/httpd -D FOREGROUND &> httpd.log &
>> /bin/sh /usr/bin/mysqld_safe --basedir=/usr &> mysqld_safe.log &
>> /usr/sbin/rabbitmq-server &> rabbitmq-server.log &
>>
>> /usr/bin/python /usr/bin/ceilometer &> ceilometer.log &
>> #start dashboard
>> systemctl start httpd
>>
>>
>> -- Original Message --
>> *From:* "zhi";;
>> *Sent:* Thursday, 18 August 2016, 5:33 PM
>> *To:* "OpenStack Development Mailing List (not for usage questions)"<
>> openstack-dev@lists.openstack.org>;
>> *Subject:* [openstack-dev] [devstack] How to start all OpenStack
>> servicesafter restarting system?
>>
>> hi, all.
>>
>> Currently, there is no "rejoin-stack.sh" script in devstack.
>>
>>  It will clear all resources and create them all again if I rerun
>> "./stack.sh" after restarting the system.
>>
>>  So, how can I start all OpenStack services quickly after restarting the
>> system?
>>
>>
>> Thanks
>> Zhi Chang
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] why common_options type is dictionary ?

2017-07-11 Thread Margin Hu

Ok, I see.

I wanted to add the "--ulimit" parameter when running docker. It seems that I
have to add an extra parameter to the kolla_docker module in kolla_docker.py.
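
A rough sketch of what that might involve (untested; the parameter name and
the docker-py plumbing are assumptions):

# in kolla_docker.py's argument spec:
ulimits=dict(required=False, type='list', default=list()),

# and when building the container's host config with docker-py:
import docker
ulimits = [docker.types.Ulimit(name='nofile', soft=65536, hard=65536)]
host_config = client.create_host_config(ulimits=ulimits)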



On 7/11 17:08, Jeffrey Zhang wrote:

There are lots of non-plain variables in kolla, dicts or lists in Ansible.

If you do not want to override the dict, you can add the following to the
globals.yml file.


docker_common_options:
auth_email: "{{ docker_registry_email }}"
auth_password: "{{ docker_registry_password }}"
auth_registry: "{{ docker_registry }}"
auth_username: "{{ docker_registry_username }}"
environment:
  KOLLA_CONFIG_STRATEGY: "{{ config_strategy }}"
  custom_key: custom value
restart_policy: "{{ docker_restart_policy }}"
restart_retries: "{{ docker_restart_policy_retry }}"


On Tue, Jul 11, 2017 at 4:55 PM, Paul Bourke > wrote:


Because it's a series of key-value pairs:

https://github.com/openstack/kolla-ansible/blob/master/ansible/group_vars/all.yml#L96-L105



Is there another type you feel would fit better?


On 11/07/17 05:22, Margin Hu wrote:

Hi Guys:

I want to set the docker_common_options parameter but find its
type is dictionary. Why?

ansible/roles/zun/tasks/pull.yml:5: common_options: "{{
docker_common_options }}"
tests/test_kolla_docker.py:44:
common_options=dict(required=False, type='dict', default=dict()),





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the Vitrage Graph

2017-07-11 Thread Mytnyk, VolodymyrX
Hi Ifat,

Thank you for investigating the issue.

The port name is unique on the graph. The OVS port name in the collectd
ovs_events plugin is identified by the 'plugin_instance' notification field.

Thanks and Regards,
Volodymyr

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Tuesday, July 11, 2017 12:00 PM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Tahhan, Maryam 
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

I’m working on this issue.
One question: is the port name, as defined by ‘plugin_instance’, supposed to be 
unique in the graph? If not, then how do you uniquely identify the port (in 
collectd)?

Thanks,
Ifat.

From: "Mytnyk, VolodymyrX" 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, 7 July 2017 at 13:27
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Cc: "Tahhan, Maryam" >
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Ifat,

I’ve tested the template file modified by you with enabled debug for the 
Vitrage graph. See all Vitrage logs in the attachments.

Thank you!

Best Regards,
Volodymyr

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Friday, July 7, 2017 12:42 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Cc: Tahhan, Maryam >
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

Can you please enable debug information in vitrage.conf, restart vitrage-graph, 
and send me the vitrage-graph.log file (in the time where the alarm is raised)? 
I’ll try to understand why the alarm is not connected to the port. The 
definitions in collectd_conf.yaml seem correct.

I did find some issues with the template file – in the alarm definition, you 
specified the name of the resource instead of the name/rawtext of the alarm. 
Also, the name of the port was missing in the port definition. See the attached 
template (which I haven’t checked, but I believe should work). In any case, 
this will not fix the problem with the alarm being connected to the resource; 
it is relevant only for the next phase after we fix the first problem.

Best Regards,
Ifat.

From: "Mytnyk, VolodymyrX" 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, 7 July 2017 at 10:35
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Cc: "Tahhan, Maryam" >
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Ifat,

Sorry, I forgot to attach the topology dump. Attaching it now.

Also, I’ve checked the topology, and looks like there is no relationship 
between neutron port and the alarm for some reason.

Thanks and Regards,
Volodymyr

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Friday, July 7, 2017 12:15 AM
To: OpenStack Development Mailing List (not for usage questions) 
>
Cc: Tahhan, Maryam >
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

Seems like the problem is that the alarm does not get connected to the port. In 
your collectd_conf.yaml, you should write:

collectd:
  - collectd_host: silpixa00399503/ovs_events/qvo818dd156-be  # collectd resource name
    type: neutron.port
    name: qvo818dd156  # openstack neutron port name

By doing this, you cause any Collectd alarm that is raised on the Collectd 
source named silpixa00399503/ovs_events/qvo818dd156-be to be connected in 
Vitrage to a resource of type neutron.port with name qvo818dd156.

Try to look in the output of ‘vitrage topology show’ (you did not attach it to 
the mail) and see the exact details of the port.

Let me know if it helped,
Ifat.

From: "Mytnyk, VolodymyrX" 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Re: [openstack-dev] [mistral] CloudFlow demo available online

2017-07-11 Thread Dougal Matthews
Thanks for this - it is really useful to be able to click around and play
with the UI - gives me a much better feel for it.


On 11 July 2017 at 08:26, Shaanan, Guy (Nokia - IL/Kfar Sava) <
guy.shaa...@nokia.com> wrote:

> CloudFlow (the Mistral Workflow Visualization Tool) demo is now available
> online[1] using a real Mistral Pike instance.
>
>
>
> You can evaluate the tool by selecting one of the pre-existing executions,
> and:
>
>- See its YAML definition,
>- Watch the execution information,
>- Select a task and watch its properties, results and published values,
>- Select a task and see the execution path of that task,
>- Delve into a sub-workflow execution,
>- and many more…
>
>
>
> You can download the latest version from our github repo[2] and also
> report bugs and ask for features.
>
>
>
> [1] http://rawgit.com/nokia/CloudFlow/master/docs/index.html
>
> [2] https://github.com/nokia/CloudFlow
>
>
>
> *-*
>
> *Guy Shaanan*
>
> CI & Internal Tools
>
> Application & Analytics, Nokia
>
> 16 Atir Yeda St. Kfar-Saba 44643, ISRAEL
>
> T: +972 9 793 3013
>
> M: +972 52 536 2986
>
> guy.shaa...@nokia.com
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 回复: [devstack] How to start all OpenStack servicesafter restarting system?

2017-07-11 Thread zhi
Hi, Abhishek.

I have a question about devstack. The file "stack-screenrc" doesn't exist
even though I installed devstack successfully. Why? I searched all over the
devstack directory but I cannot find it. Could you give me some advice?


Thanks
Zhi Chang

2016-08-22 16:42 GMT+08:00 wk <304702...@qq.com>:

> my scripts, runs ok:
>
> /usr/bin/python /usr/bin/glance-registry 
> --config-file=/etc/glance/glance-registry.conf
> &> glance-registry.log &
> /usr/bin/python /usr//bin/glance-api --config-file=/etc/glance/glance-api.conf
> &> glance-api.log &
> /usr/bin/python /usr/bin/nova-conductor &> nova-conductor.log &
> sg libvirtd /usr/bin/nova-compute --config-file /etc/nova/nova.conf &>
> nova-compute.log &
> /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova.conf &>
> nova-compute.log &
> /usr/bin/python /usr/bin/nova-cert &> nova-cert.log &
> /usr/bin/python /usr/bin/nova-network --config-file /etc/nova/nova.conf &>
> nova-network.log &
> /usr/bin/python /usr/bin/nova-scheduler --config-file /etc/nova/nova.conf
> &> nova-scheduler.log &
> /usr/bin/python /usr/bin/nova-novncproxy --config-file /etc/nova/nova.conf
> --web /opt/stack/noVNC &> nova-novncproxy.log &
> /usr/bin/python /usr/bin/nova-xvpvncproxy --config-file
> /etc/nova/nova.conf &> nova-xvpvncproxy.log &
> /usr/bin/python /usr/bin/nova-consoleauth &> nova-consoleauth.log &
> /usr/bin/python /usr/bin/nova-objectstore &> nova-objectstore.log &
> /usr/bin/python /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf
> &> cinder-api.log &
> /usr/bin/python /usr/bin/cinder-scheduler --config-file
> /etc/cinder/cinder.conf &> cinder-scheduler.log &
> /usr/bin/python /usr/bin/cinder-volume --config-file
> /etc/cinder/cinder.conf &> cinder-volume.log &
> /usr/bin/python /usr/bin/nova-api &> nova-api.log &
> /usr/bin/python /opt/stack/keystone/bin/keystone-all --config-file
> /etc/keystone/keystone.conf --log-config /etc/keystone/logging.conf -d
> --debug &> keystone-all.log &
> /usr/sbin/httpd -D FOREGROUND &> httpd.log &
> /bin/sh /usr/bin/mysqld_safe --basedir=/usr &> mysqld_safe.log &
> /usr/sbin/rabbitmq-server &> rabbitmq-server.log &
>
> /usr/bin/python /usr/bin/ceilometer &> ceilometer.log &
> #start dashboard
> systemctl start httpd
>
>
> -- Original Message --
> *From:* "zhi";;
> *Sent:* Thursday, 18 August 2016, 5:33 PM
> *To:* "OpenStack Development Mailing List (not for usage questions)"<
> openstack-dev@lists.openstack.org>;
> *Subject:* [openstack-dev] [devstack] How to start all OpenStack services
> after restarting system?
>
> hi, all.
>
> Currently, there is no "rejoin-stack.sh" script in devstack.
>
>  It will clear all resources and create them all again if I rerun
> "./stack.sh" after restarting the system.
>
>  So, how can I start all OpenStack services quickly after restarting the
> system?
>
>
> Thanks
> Zhi Chang
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] failed to download image from TryStack

2017-07-11 Thread Jeffrey Zhang
First of all, this site is not TryStack; it is tarballs.openstack.org.

I asked the OpenStack infra team and got the following feedback:

> this was disabled yesterday since the images produced massive load. fungi
started putting up a caching proxy for these.

This is disabled for now and will be re-enabled in the future.

On Tue, Jul 11, 2017 at 9:22 AM, Margin Hu  wrote:

> Hi Guys
>
> I want to try the community images, but failed to download them.
>
> [root@server opt]# wget https://tarballs.openstack.org
> /kolla/images/centos-source-registry-ocata.tar.gz
> --2017-07-11 09:12:37-- https://tarballs.openstack.org
> /kolla/images/centos-source-registry-ocata.tar.gz
> Resolving tarballs.openstack.org (tarballs.openstack.org)...
> 23.253.108.137, 2001:4800:7817:104:be76:4eff:fe05:dbee
> Connecting to tarballs.openstack.org 
> (tarballs.openstack.org)|23.253.108.137|:443...
> connected.
> HTTP request sent, awaiting response... 403 Forbidden
> 2017-07-11 09:12:37 ERROR 403: Forbidden.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] why common_options type is dictionary ?

2017-07-11 Thread Jeffrey Zhang
There are lots of non-plain variables in kolla, dicts or lists in Ansible.

If you do not want to override the dict, you can add the following to the
globals.yml file.

docker_common_options:
auth_email: "{{ docker_registry_email }}"
auth_password: "{{ docker_registry_password }}"
auth_registry: "{{ docker_registry }}"
auth_username: "{{ docker_registry_username }}"
environment:
  KOLLA_CONFIG_STRATEGY: "{{ config_strategy }}"
  custom_key: custom value
restart_policy: "{{ docker_restart_policy }}"
restart_retries: "{{ docker_restart_policy_retry }}"


On Tue, Jul 11, 2017 at 4:55 PM, Paul Bourke  wrote:

> Because it's a series of key-value pairs:
> https://github.com/openstack/kolla-ansible/blob/master/ansible/group_vars/all.yml#L96-L105
>
> Is there another type you feel would fit better?
>
>
> On 11/07/17 05:22, Margin Hu wrote:
>
>> Hi Guys:
>>
>> I want to set the docker_common_options parameter but find its type is
>> dictionary. Why?
>>
>> ansible/roles/zun/tasks/pull.yml:5:common_options: "{{
>> docker_common_options }}"
>> tests/test_kolla_docker.py:44: common_options=dict(required=False,
>> type='dict', default=dict()),
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the Vitrage Graph

2017-07-11 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi Volodymyr,

I’m working on this issue.
One question: is the port name, as defined by ‘plugin_instance’, supposed to be 
unique in the graph? If not, then how do you uniquely identify the port (in 
collectd)?

Thanks,
Ifat.

From: "Mytnyk, VolodymyrX" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 7 July 2017 at 13:27
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: "Tahhan, Maryam" 
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Ifat,

I’ve tested the template file modified by you with enabled debug for the 
Vitrage graph. See all Vitrage logs in the attachments.

Thank you!

Best Regards,
Volodymyr

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Friday, July 7, 2017 12:42 PM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Tahhan, Maryam 
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

Can you please enable debug information in vitrage.conf, restart vitrage-graph, 
and send me the vitrage-graph.log file (in the time where the alarm is raised)? 
I’ll try to understand why the alarm is not connected to the port. The 
definitions in collectd_conf.yaml seem correct.

I did find some issues with the template file – in the alarm definition, you 
specified the name of the resource instead of the name/rawtext of the alarm. 
Also, the name of the port was missing in the port definition. See the attached 
template (which I haven’t checked, but I believe should work). In any case, 
this will not fix the problem with the alarm being connected to the resource; 
it is relevant only for the next phase after we fix the first problem.

Best Regards,
Ifat.

From: "Mytnyk, VolodymyrX" 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, 7 July 2017 at 10:35
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Cc: "Tahhan, Maryam" >
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Ifat,

Sorry, I forgot to attach the topology dump. Attaching it now.

Also, I’ve checked the topology, and looks like there is no relationship 
between neutron port and the alarm for some reason.

Thanks and Regards,
Volodymyr

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Friday, July 7, 2017 12:15 AM
To: OpenStack Development Mailing List (not for usage questions) 
>
Cc: Tahhan, Maryam >
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

Seems like the problem is that the alarm does not get connected to the port. In 
your collectd_conf.yaml, you should write:

collectd:
  - collectd_host: silpixa00399503/ovs_events/qvo818dd156-be  # collectd resource name
    type: neutron.port
    name: qvo818dd156  # openstack neutron port name

By doing this, you cause any Collectd alarm that is raised on the Collectd 
source named silpixa00399503/ovs_events/qvo818dd156-be to be connected in 
Vitrage to a resource of type neutron.port with name qvo818dd156.

Try to look in the output of ‘vitrage topology show’ (you did not attach it to 
the mail) and see the exact details of the port.
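
For example, one quick way to locate the port entry (just an illustration):

vitrage topology show | python -m json.tool | grep -B 2 -A 6 neutron.port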

Let me know if it helped,
Ifat.

From: "Mytnyk, VolodymyrX" 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Thursday, 6 July 2017 at 23:59
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Cc: "Tahhan, Maryam" >
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Ifat,

Thank you for your help. See my responses to your questions
below:


· You see the neutron port in your entity graph, and it is connected to 
the VM
V:  yes, it’s correct.


· You see the alarm on the alarms view, and its resource is the
neutron port
V: I see the alarm, sent by collectd, on the alarm page (or vitrage alarm
list). Not sure how to check the alarm resources?


· You 

Re: [openstack-dev]  [kolla][kolla-ansible] ceph_osd error when startup ceph_osd container.

2017-07-11 Thread Paul Bourke

Hi,

This usually means your ceph mons have failed to cluster correctly. The 
osd bootstrap calls 'ceph quorum_status' to ensure a successful quorum 
before proceeding. If the mons are blocked or otherwise unavailable, this
command can time out, which is likely what you're seeing. Make sure the
ceph_mon containers have started correctly, you have the correct amount 
of them, etc.
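
A few quick checks along those lines (container names are whatever your
deployment uses; these are illustrative):

docker ps -a | grep ceph_mon              # are the mon containers running?
docker logs ceph_mon                      # any clustering errors in the log?
docker exec ceph_mon ceph -s              # does the cluster report a quorum?
docker exec ceph_mon ceph quorum_status   # the same call the bootstrap makes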


Regards,
-Paul

On 11/07/17 02:47, zhou...@zte.com.cn wrote:

Hi kolla-ansible team:

 I have met a weird problem when starting up the ceph_osd container.

CONTAINER ID   IMAGE                                                COMMAND         CREATED        STATUS                    PORTS   NAMES
64b4617ff502   10.20.11.2:4000/kolla/centos-binary-ceph-osd:4.0.2   "kolla_start"   13 hours ago   Exited (1) 13 hours ago           bootstrap_osd_0



docker logs 64b4617ff502

2017-04-28 16:32:51.854980 7f6e3795b700  0 monclient(hunting): authenticate timed out after 300
2017-04-28 16:32:51.855032 7f6e3795b700  0 librados: client.admin authentication error (110) Connection timed out
Error connecting to cluster: TimedOut
ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185)


Have you ever met this problem? I would appreciate it if you could give me
some help. Thank you very much.



B.R.

zhouya


周亚


IT开发工程师 IT Development Engineer
虚拟化南京三部/无线研究院/无线产品经营部 NIV Nanjing Dept. III/Wireless 
Product R&D Institute/Wireless Product Operation Division




南京市雨花台区花神大道6号中兴通讯一区二期5楼A区
A District, 5/F, R Building, ZTE Corporation Plaza,#6 Huashen Ave.
Yuhuatai District, Nanjing, P..R.China, 210012
T: +86 13951010061 M: +86 13772010248
E: zhou...@zte.com.cn
www.zte.com.cn 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] why common_options type is dictionary ?

2017-07-11 Thread Paul Bourke
Because its a series of key value pairs: 
https://github.com/openstack/kolla-ansible/blob/master/ansible/group_vars/all.yml#L96-L105


Is there another type you feel would fit better?

On 11/07/17 05:22, Margin Hu wrote:

Hi Guys:

I want to set the docker_common_options parameter but find its type is
dictionary. Why?


ansible/roles/zun/tasks/pull.yml:5:common_options: "{{ 
docker_common_options }}"
tests/test_kolla_docker.py:44: common_options=dict(required=False, 
type='dict', default=dict()),





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [masakari]Remove ERROR instances from recovery targets

2017-07-11 Thread Rikimaru Honjo

Hello all,

Currently, Masakari also rescues ERROR instances when a host failure happens.
Those instances will be changed to ACTIVE after being rescued.[1]

But I think that some users don't want to rescue ERROR instances.
For example, if a user is running a 1 ACT / n SBY application on instances,
launching ERROR instances will cause unexpected effects.

So I want to add a configurable option.
ERROR instances won't be rescued if the option is set.
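
For illustration, such an option might look like this in masakari.conf (the
section and option name are placeholders, not an agreed design):

[host_failure]
# Skip evacuation of instances that are already in ERROR state
ignore_instances_in_error_state = True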

Please share your opinion about this issue.

P.S.
I talked about this issue in the IRC meeting.
http://eavesdrop.openstack.org/meetings/masakari/2017/masakari.2017-07-11-04.00.log.html
But time was up at that time.

[1]
This is Evacuate API's behavior.

[2]
There is a possibility that following patch resolve this issue,
but that will take time.
https://review.openstack.org/#/c/469029/

Best regards,
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikmaru Honjo
E-mail:honjo.rikim...@po.ntt-tx.co.jp



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] CloudFlow demo available online

2017-07-11 Thread Shaanan, Guy (Nokia - IL/Kfar Sava)
CloudFlow (the Mistral Workflow Visualization Tool) demo is now available 
online[1] using a real Mistral Pike instance.

You can evaluate the tool by selecting one of the pre-existing executions, and:

  *   See its YAML definition,
  *   Watch the execution information,
  *   Select a task and watch its properties, results and published values,
  *   Select a task and see the execution path of that task,
  *   Delve into a sub-workflow execution,
  *   and many more...

You can download the latest version from our github repo[2] and also report 
bugs and ask for features.

[1] http://rawgit.com/nokia/CloudFlow/master/docs/index.html
[2] https://github.com/nokia/CloudFlow

-
Guy Shaanan
CI & Internal Tools
Application & Analytics, Nokia
16 Atir Yeda St. Kfar-Saba 44643, ISRAEL
T: +972 9 793 3013
M: +972 52 536 2986
guy.shaa...@nokia.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev