Re: [openstack-dev] [neutron]CIDR overlapping

2017-02-13 Thread joehuang
Oh, it's set to "True" in the devstack installation. Thanks for the clarification.

Best Regards
Chaoyi Huang (joehuang)

From: Kevin Benton [ke...@benton.pub]
Sent: 14 February 2017 12:17
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron]CIDR overlapping

Do you have allow_overlapping_ips set to True?
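
For anyone else hitting this: the behaviour is controlled by the
allow_overlapping_ips option in neutron.conf, which devstack enables for you.
An illustrative snippet, not a recommendation either way:

    [DEFAULT]
    # True permits subnets with overlapping CIDRs; with False the API would
    # reject the second 20.0.1.0/24 subnet-create shown below.
    allow_overlapping_ips = True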

On Mon, Feb 13, 2017 at 7:56 PM, joehuang 
> wrote:
Hello,

During regression testing, I found that Neutron allows overlapping CIDRs in the
same project.

neutron --os-region-name RegionOne net-create dup-net1
neutron --os-region-name RegionOne net-create dup-net2
neutron --os-region-name RegionOne subnet-create dup-net1 20.0.1.0/24
neutron --os-region-name RegionOne subnet-create dup-net2 20.0.1.0/24

All commands execute successfully, and afterwards the project has two
subnets with the same CIDR: 20.0.1.0/24.

Should this be allowed? Or is it just that the race condition is too
difficult to address?

Best Regards
Chaoyi Huang (joehuang)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tricircle] Heading for PTG

2017-02-13 Thread Vega Cai
Hi folks,

I am heading to the Atlanta PTG as the representative of the Tricircle project.
If you are interested in Tricircle and will also be attending the PTG, we can
meet and discuss some topics there.

BR
Zhiyuan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo][mistral][all]PTG: Cross-Project: OpenStack Orchestration Integration

2017-02-13 Thread Renat Akhmerov
Ok, thanks Rico. Keep us updated on the schedule changes. I'm currently
finalizing the schedule for Mistral; for now I'll leave Thursday 11.00-12.00
blank.

Renat Akhmerov
@Nokia

> On 14 Feb 2017, at 10:49, Rico Lin  wrote:
> 
> Let's move the schedule to Thursday morning, 11:00 am to 12:00 pm, in the same
> room. Hope that schedule works for all :)
> 
> Also, I will ask if Mon - Tue is available or not, but there might be more
> cross-project sessions that we have to avoid conflicting with.
> 
> 2017-02-14 10:02 GMT+08:00 Renat Akhmerov  >:
>> On 13 Feb 2017, at 19:30, Emilien Macchi > > wrote:
>> 
>> 
>> 
>> On Mon, Feb 13, 2017 at 4:48 AM, Rico Lin > > wrote:
>> Dear all
>> 
>> PTG is approaching, and we have a few ideas around the TripleO team ([1] and [2])
>> about use cases like using Mistral through Heat. It seems some great
>> OpenStackers have already started thinking about how the Orchestration services (Heat,
>> Mistral, and some other projects) could be used together for a better developer
>> or operator experience. First, of course,
>> we will arrange a fishbowl design session on Wednesday morning.
>> Let's settle on 10:00 am to 10:50 am at Macon (on level 2) for now.
>> Could teams kindly confirm that they can attend this cross-project
>> session, or whether it needs to be rescheduled?
>> 
>> Can we reschedule it? It seems like the only slot where we have sessions 
>> organized is on Wednesday morning, for our container work:
>> https://etherpad.openstack.org/p/tripleo-ptg-pike 
>> 
>> 
>> Wednesday 9:00 Cross-Teams talk about containers and networking
>> Wednesday 10:00: TripleO Containers status update and path forward
>> 
>> So I suggest Wednesday afternoon or Thursday or Friday morning. At your 
>> convenience.
> 
> 
> Thursday morning would work for me.
> 
> Renat Akhmerov
> @Nokia
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> 
> 
> -- 
> May The Force of OpenStack Be With You, 
> Rico Lin
> irc: ricolin
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Meeting at 20:00 UTC this Wednesday, 15th February

2017-02-13 Thread Richard Jones
Hi folks,

The Horizon team will be having our next meeting at 20:00 UTC this
Wednesday, 15th February in #openstack-meeting-3

Meeting agenda is here: https://wiki.openstack.org/wiki/Meetings/Horizon

Anyone is welcome to add agenda items and everyone interested in
Horizon is encouraged to attend. As the PTG approaches it'd be great
to get your thoughts about Pike planning into the etherpad[1], and please
feel free to discuss such planning at the weekly meeting!


Cheers,

Richard

[1] https://etherpad.openstack.org/p/horizon-ptg-pike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Blueprints for DPDK in OvS2.6

2017-02-13 Thread Saravanan KR
Oops, forgot to update: we have changed the names of the BPs, as the
"dot" in the BP names was not playing well with the Gerrit reviews.

The new URLs are (dot changed to dash):
  https://blueprints.launchpad.net/tripleo/+spec/ovs-2-6-dpdk
  https://blueprints.launchpad.net/tripleo/+spec/ovs-2-6-features-dpdk

And thanks for the confirmation.

Regards,
Saravanan KR

On Mon, Feb 13, 2017 at 9:36 PM, Emilien Macchi  wrote:
> On Wed, Feb 8, 2017 at 1:59 AM, Saravanan KR  wrote:
>> Hello,
>>
>> We have raised 2 BP for OvS2.6 integration with DPDK support.
>>
>> Basic Migration -
>> https://blueprints.launchpad.net/tripleo/+spec/ovs-2.6-dpdk (Targeted
>> for March)
>> OvS 2.6 Features -
>> https://blueprints.launchpad.net/tripleo/+spec/ovs-2.6-features-dpdk
>> (Targeted for Pike)
>
> Both links are 404, any idea what happened?
>
> Other than that, I don't see any blocker to have these blueprints in Pike 
> cycle.
>
> Thanks!
>
>> We find the changes to be straightforward and minor, and the required
>> changes have been updated in the BP descriptions. Please let us know if
>> a spec is required.
>>
>> Regards,
>> Saravanan KR
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][requirements] Eventlet version bump coming?

2017-02-13 Thread Tony Breeds
Hi All,
So there is a new version of eventlet out and we refused to bump it late in
the Ocata cycle, but now that we're early in the Pike cycle I think we're okay
to do it.  The last time [1] we tried to bump eventlet it was pretty rocky, and we
decided that we'd need a short-term group of people focused on testing the new
bump rather than go through the slightly painful:

 1: Bump eventlet version
 2: Find and file bugs
 3: Revert
 4: Wait for next release
 goto 1

process.  So can we get a few people together to map this out?  I'd like to try
it shortly after the PTG.

From an implementation POV I'd like to bump the upper-constraint and let that
sit for a while before we touch global-requirements.txt.
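
For anyone unfamiliar with the mechanics, that first step is a one-line change
in openstack/requirements, roughly of this shape (the version numbers below are
placeholders, not the actual ones):

    -eventlet===0.19.0   # old pin (placeholder)
    +eventlet===0.20.1   # new pin (placeholder)

global-requirements.txt, which carries the minimum versions projects may
require, would only follow once the constraint bump has proven itself.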

Yours Tony.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2016-February/thread.html#86745


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo] Oslo log files

2017-02-13 Thread ChangBo Guo
We got cool logos in different formats, just use them :-)
I’m excited to finally be able to share final project logo files with you.
Inside this folder, you’ll find full-color and one-color versions of the
logo, plus a mascot-only version, in EPS, JPG and PNG formats. You can use
them on presentations and wherever else you’d like to add some team flair.

https://www.dropbox.com/sh/kj0e3sdu47pqr3e/AABllB31vJZDlw4OkZRK_AZia?dl=0

-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Alternative approaches for L3 HA

2017-02-13 Thread Kosnik, Lubosz
From my perspective I can tell that the problem is entirely in the architecture,
and we cannot solve it without something outside of Neutron.
Two releases ago I started working on hardening that feature, but all my ideas
were rejected by Armando and Assaf. They decided that adding an outside dependency
would open the door for new bugs to come into Neutron from its dependencies [1].

You need to know that there are two outstanding bugs in this feature. There is
an internal and external connectivity split brain. The patch I made at [2] is
"fixing" part of the problem: it allows you to specify additional tests to verify
connectivity from the router to the GW.
There is also a problem with connectivity between network nodes. It's more
problematic and, like you said, in my opinion it's unsolvable without using an
external mechanism.

If there is any need for help with anything, I would love to share my knowledge
about this feature and what exactly is not working. If anyone needs help with
any of this, please ping me by email or IRC.

[1] https://bugs.launchpad.net/neutron/+bug/1375625/comments/31
[2] https://review.openstack.org/#/c/273546/

Lubosz

On Feb 13, 2017, at 4:10 AM, Anna Taraday 
> wrote:

To avoid a dependency of the data plane on the control plane, it is possible to deploy a
separate key-value storage cluster on the data plane side, using the same network
nodes.
I'm proposing to make some changes to enable experimentation in this field; we
have yet to come up with any other concrete solution.

On Mon, Feb 13, 2017 at 2:01 PM 
> wrote:

Hi,





We also operate using Juno with the VRRP HA implementation and had to patch
through several bugs before getting to the Mitaka release.

A pluggable, drop-in alternative would be highly appreciated. However, our
experience has been that the decoupling of VRRP from the control plane is
actually a benefit, as traffic is not affected when the control plane is
down.

In a solution where the L3 HA implementation becomes tied to the availability
of the control plane (an etcd cluster or any other KV store), an operator
would have to account for extra failure scenarios in which the KV store affects
multiple routers, rather than the outage of a single L3 node, which is the case
we usually have to account for now.





Just my $.02



Cristian



From: Anna Taraday 
[mailto:akamyshnik...@mirantis.com]
Sent: Monday, February 13, 2017 11:45 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Alternative approaches for L3 HA



In etcd, for each HA router we can store a key which identifies which agent is
active. L3 agents will "watch" this key.
All these tools have a leader election mechanism which can be used to determine
the agent that is active for the current HA router.
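
A rough sketch of that key-per-router idea (illustrative only, not Neutron
code), assuming the python-etcd3 client and a reachable etcd cluster; the key
layout and agent id below are made up:

    import socket

    import etcd3

    AGENT_ID = socket.gethostname()          # hypothetical agent identifier
    KEY = '/neutron/l3ha/router-1/active'    # hypothetical key layout

    client = etcd3.client(host='127.0.0.1', port=2379)

    # Try to claim the "active" role.  The key is bound to a short TTL lease,
    # so the claim disappears if this agent dies and stops refreshing it.
    # (A real implementation would use an etcd transaction to make the claim
    # atomic, and would call lease.refresh() periodically while healthy.)
    lease = client.lease(ttl=5)
    value, _ = client.get(KEY)
    if value is None:
        client.put(KEY, AGENT_ID, lease=lease)

    # Standby agents watch the key; a change or expiry is the failover signal
    # to take over the router (plug the gateway, send gratuitous ARPs, ...).
    events, cancel = client.watch(KEY)
    for event in events:
        print('active agent for router-1 is now: %r' % event.value)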



On Mon, Feb 13, 2017 at 7:02 AM zhi 
> wrote:

Hi, we are using L3 HA in our production environment now. Router instances
communicate with each other via the VRRP protocol. In my opinion, although VRRP is a
control plane thing, the actual VRRP traffic uses the data plane NIC, so
router namespaces sometimes cannot talk to each other when the data plane is
busy. If we used etcd (or something else), would every router instance register an
"id" in etcd?





Thanks

Zhi Chang

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--

Regards,
Ann Taraday



__
OpenStack Development Mailing List (not for usage 

[openstack-dev] Call for mentors and funding - Outreachy

2017-02-13 Thread Mahati C
Hello everyone,

An update on the Outreachy program, including a request for volunteer
mentors and funding. For those of you who are not aware, Outreachy helps
people from underrepresented groups get involved in free and open source
software by matching interns with established mentors in the upstream
community. For more info, please visit:
https://wiki.openstack.org/wiki/Outreachy

So far we have confirmation of three spots for OpenStack in this round
of Outreachy, but we are receiving more applicants who are interested in
contributing to different OpenStack projects. Interested mentors - please
publish your project ideas to this page
https://wiki.openstack.org/wiki/Internship_ideas. Here is a link that helps
you get acquainted with mentorship process:
https://wiki.openstack.org/wiki/Outreachy/Mentors

We are looking for additional sponsors to help support the increase in
OpenStack applicants. The sponsorship cost is 6,500 USD per intern, which
is used to provide them a stipend for the three-month program. You can
learn more about sponsorship here:
https://wiki.gnome.org/Outreachy/Admin/InfoForOrgs#Action

Outreachy has been one of the most important and effective diversity
efforts we've invested in. It has proven to be a way to retain new
contributors; we've had some amazing participants become long-term
contributors to our community.

Please help spread the word. If you are interested in becoming a mentor or
sponsoring an intern, please contact me (mahati.chamarthy AT intel.com) or
Victoria (victoria AT redhat.com).

Thanks,
Mahati
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]CIDR overlapping

2017-02-13 Thread Kevin Benton
Do you have allow_overlapping_ips set to True?

On Mon, Feb 13, 2017 at 7:56 PM, joehuang  wrote:

> Hello,
>
> During regression testing, I found that Neutron allows overlapping CIDRs in
> the same project.
>
> neutron --os-region-name RegionOne net-create dup-net1
> neutron --os-region-name RegionOne net-create dup-net2
> neutron --os-region-name RegionOne subnet-create dup-net1 20.0.1.0/24
> neutron --os-region-name RegionOne subnet-create dup-net2 20.0.1.0/24
>
> All commands execute successfully, and afterwards the project has
> two subnets with the same CIDR: 20.0.1.0/24
>
> Should this be allowed? Or is it just that the race condition is too
> difficult to address?
>
> Best Regards
> Chaoyi Huang (joehuang)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo][mistral][all]PTG: Cross-Project: OpenStack Orchestration Integration

2017-02-13 Thread Rico Lin
Let's move the schedule to Thursday morning, 11:00 am to 12:00 pm, in the same
room. Hope that schedule works for all :)

Also, I will ask if Mon - Tue is available or not, but there might be more
cross-project sessions that we have to avoid conflicting with.

2017-02-14 10:02 GMT+08:00 Renat Akhmerov :

> On 13 Feb 2017, at 19:30, Emilien Macchi  wrote:
>
>
>
> On Mon, Feb 13, 2017 at 4:48 AM, Rico Lin 
> wrote:
>
>> Dear all
>>
>> PTG is approaching, and we have a few ideas around the TripleO team ([1] and [2])
>> about use cases like using Mistral through Heat. It seems some great
>> OpenStackers have already started thinking about how the Orchestration services (Heat,
>> Mistral, and some other projects) could be used together for a better developer
>> or operator experience. First, of course,
>> we will arrange a fishbowl design session on Wednesday morning.
>> Let's settle on 10:00 am to 10:50 am at Macon (on level 2) for now.
>> Could teams kindly confirm that they can attend this cross-project
>> session, or whether it needs to be rescheduled?
>>
>
> Can we reschedule it? It seems like the only slot where we have sessions
> organized is on Wednesday morning, for our container work:
> https://etherpad.openstack.org/p/tripleo-ptg-pike
>
> Wednesday 9:00 Cross-Teams talk about containers and networking
> Wednesday 10:00: TripleO Containers status update and path forward
>
> So I suggest Wednesday afternoon or Thursday or Friday morning. At your
> convenience.
>
>
> Thursday morning would work for me.
>
> Renat Akhmerov
> @Nokia
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Neutron logo files

2017-02-13 Thread Kevin Benton
We got some cool new logos. Check them out!
-- Forwarded message --
From: "Heidi Joy Tretheway" 
Date: Feb 13, 2017 15:02
Subject: Neutron logo files
To: , "Armando M." 
Cc:

Hi Armando and Kevin,

I’m excited to finally be able to share final project logo files with you.
Inside this folder, you’ll find full-color and one-color versions of the
logo, plus a mascot-only version, in EPS, JPG and PNG formats. You can use
them on presentations and wherever else you’d like to add some team flair.

https://www.dropbox.com/sh/9nzvr9nxzo7w9zw/AACdyh1flHgQWc37dwdxnfWda?dl=0

At the PWG, we’ll have stickers for your team of the mascot, plus signage
on your room. I’m especially excited for the project teams to see all of
the logos together as one group, because they work beautifully together
stylistically while making each project’s mark distinctive. Feel free to
share this with your team, and thanks to you and to them for the hard work
they put into reaching an agreement on the mascot. Also feel free to direct
any questions my way!


[image: photo]
*Heidi Joy Tretheway*
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: heidi.tretheway

  
  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][magnum][murano][sahara][tripleo][mistral][all]PTG: Cross-Project:Orchestration feedback and announcement

2017-02-13 Thread Rico Lin
>
>
> Aren’t we supposed to have all cross-project sessions on Mon-Tue?
>

That is an interesting way to do it, although the cross-project effort
mentioned for the PTG refers to release goals, the Architecture work group, and
other work groups. I will ask if those times are available for this kind of
cross-project session.


> The time slot you mentioned is kind of ok, I can be there (although
> earlier would be better) but starting Wed we all have time dedicated to
> specific project discussions.
>

Thanks for confirming.


> Renat Akhmerov
> @Nokia
>
> On 13 Feb 2017, at 17:38, Rico Lin  wrote:
>
> Dear all
>
> We would like to have a Cross Project fishbowl session about Orchestration
> feedback and announcements.
> We would like to help with any improvement that will potentially help other
> projects.
> That's why we need your feedback.
> Heat has landed some cool improvements in the last cycle, like a 60% reduction
> in memory usage, a stable convergence engine, etc. Therefore, we would like to
> check with teams whether those nice features can be integrated and enabled
> within their projects. If not, what is still required to make that
> happen?
>
>
> *Let's schedule this session in Macon(on level2) at 11:00 am - 12:00 pm on
> Wednesday Morning for now.*
> Could teams kindly confirm that they can attend this cross-project
> session, or whether it needs to be rescheduled?
> Hopefully no team's schedule conflicts with this one.
> If the schedule is a perfect fit for all teams and you feel this touches on
> your concerns, then we hope to see you all there :)
>
> --
> May The Force of OpenStack Be With You,
>
> *Rico Lin*irc: ricolin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron]CIDR overlapping

2017-02-13 Thread joehuang
Hello,

During regression testing, I found that Neutron allows overlapping CIDRs in the
same project.

neutron --os-region-name RegionOne net-create dup-net1
neutron --os-region-name RegionOne net-create dup-net2
neutron --os-region-name RegionOne subnet-create dup-net1 20.0.1.0/24
neutron --os-region-name RegionOne subnet-create dup-net2 20.0.1.0/24

All commands execute successfully, and afterwards the project has two
subnets with the same CIDR: 20.0.1.0/24.

Should this be allowed? Or is it just that the race condition is too
difficult to address?

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Mascot

2017-02-13 Thread Michał Jastrzębski
And here we are, our mascot :) Feel free to use it in any Kolla-related
presentation or tattoo studio.


https://www.dropbox.com/sh/94pukquiw425ji2/AAAl2wVmm72KHPLCAzR6Uboha?dl=0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Project mascot available

2017-02-13 Thread Sean McGinnis
For your viewing and slide designing pleasure...

We have the official mascot completed from the illustrators. Multiple formats
are available from here:

https://www.dropbox.com/sh/8s3859c6qulu1m3/AABu_rIyuBM_bZmfGkagUXhua?dl=0

Stay golden, pony boy.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-13 Thread Emilien Macchi
Team, I've got this email from Heidi.

I see 3 options :

1. Keep existing logo: http://tripleo.org/_static/tripleo_owl.svg .

2. Re-design a new logo that "meets" OpenStack "requirements".

3. Pick-up the one proposed (see below).


Personally, I would vote for keeping our existing logo (1.) unless someone
has time to create another one or if the team likes the proposed one.

The reason I want to keep our logo is that our current logo was
created by TripleO devs, we like it, and we already have tee-shirts and
other goodies with it. I don't see any good reason to change it.

Discussion is open and we'll vote as a team.

Thanks,

Emilien.

-- Forwarded message --
From: Heidi Joy Tretheway 
Date: Mon, Feb 13, 2017 at 8:27 PM
Subject: TripleO mascot - how can I help your team?
To: Emilien Macchi 


Hi Emilien,

I’m following up on the much-debated TripleO logo. I’d like to help your
team reach a solution that makes them happy but still fits within the
family of logos we’re using at the PTG and going forward. Here’s what our
illustrators came up with, which hides an “O” shape in the owl (face and
wing arcs).

https://www.dropbox.com/sh/qz45miiiam3caiy/AAAzPGYEZRMGH6Otid3bLfHFa?dl=0
At this point, I don’t have quorum from your team (I got a lot of
conflicting feedback, most of which was “don’t like” but not actionable for
the illustrators to make a revision). At the PTG, we’ll have mascot
stickers and signage for all teams except for Ironic and TripleO, since
we’re still waiting on your teams to make a final decision.

May I recommend that your team choose one person (or a small group of no
more than three) to finalize this? I was able to work through all of
Swift’s issues with just a quick 15-minute chat with John Dickinson and I’d
like to believe we can solve this for TripleO as well.

We know some of your team has expressed concern over retiring the existing
mascot. It’s not our intention to make anyone “get rid of” a beloved icon.
Your team can certainly print it on vintage items like shirts and stickers.
But for official channels like the website, we need a logo to represent
TripleO that’s cohesive with the rest of the set.

Perhaps when you’re face to face with your team at the PTG, you can discuss
and hopefully render a final decision to either accept this as a logo, or
determine a few people willing to make any final changes with me?

Thanks in advance for your help!


[image: photo]
*Heidi Joy Tretheway*
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: heidi.tretheway

  
  






-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] mascot

2017-02-13 Thread Steve Martinelli
this looks great!

On Mon, Feb 13, 2017 at 9:28 PM, Lance Bragstad  wrote:

> Good news! We just got the final revision for our official keystone mascot
> [0]!
>
> I have a note on my todo list to put together a basic chart deck with
> them. I'll send out a link for folks to use when I get them done.
>
> [0] https://www.dropbox.com/sh/0owldvy0u5y4yk9/AAB5Q95wYj-oaiisneKbnEiDa?dl=0
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] mascot

2017-02-13 Thread Lance Bragstad
Good news! We just got the final revision for our official keystone mascot
[0]!

I have a note on my todo list to put together a basic chart deck with them.
I'll send out a link for folks to use when I get them done.

[0] https://www.dropbox.com/sh/0owldvy0u5y4yk9/AAB5Q95wYj-oaiisneKbnEiDa?dl=0
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo][mistral][all]PTG: Cross-Project: OpenStack Orchestration Integration

2017-02-13 Thread Renat Akhmerov
> On 13 Feb 2017, at 19:30, Emilien Macchi  wrote:
> 
> 
> 
> On Mon, Feb 13, 2017 at 4:48 AM, Rico Lin  > wrote:
> Dear all
> 
> PTG is approaching, and we have a few ideas around the TripleO team ([1] and [2]) about
> use cases like using Mistral through Heat. It seems some great OpenStackers
> have already started thinking about how the Orchestration services (Heat, Mistral, and
> some other projects) could be used together for a better developer or operator
> experience. First, of course,
> we will arrange a fishbowl design session on Wednesday morning.
> Let's settle on 10:00 am to 10:50 am at Macon (on level 2) for now.
> Could teams kindly confirm that they can attend this cross-project
> session, or whether it needs to be rescheduled?
> 
> Can we reschedule it? It seems like the only slot where we have sessions 
> organized is on Wednesday morning, for our container work:
> https://etherpad.openstack.org/p/tripleo-ptg-pike 
> 
> 
> Wednesday 9:00 Cross-Teams talk about containers and networking
> Wednesday 10:00: TripleO Containers status update and path forward
> 
> So I suggest Wednesday afternoon or Thursday or Friday morning. At your 
> convenience.


Thursday morning would work for me.

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][magnum][murano][sahara][tripleo][mistral][all]PTG: Cross-Project:Orchestration feedback and announcement

2017-02-13 Thread Renat Akhmerov
Hi Rico,

Aren’t we supposed to have all cross-project sessions on Mon-Tue?

The time slot you mentioned is kind of ok, I can be there (although earlier 
would be better) but starting Wed we all have time dedicated to specific 
project discussions.

Renat Akhmerov
@Nokia

> On 13 Feb 2017, at 17:38, Rico Lin  wrote:
> 
> Dear all
> 
> We would like to have a Cross Project fishbowl session about Orchestration
> feedback and announcements.
> We would like to help with any improvement that will potentially help other
> projects.
> That's why we need your feedback.
> Heat has landed some cool improvements in the last cycle, like a 60% reduction in
> memory usage, a stable convergence engine, etc. Therefore, we would like to check
> with teams whether those nice features can be integrated and enabled within their
> projects. If not, what is still required to make that happen?
> 
> 
> Let's schedule this session in Macon(on level2) at 11:00 am - 12:00 pm on 
> Wednesday Morning for now.
> Could teams kindly confirm that they can attend this cross-project
> session, or whether it needs to be rescheduled?
> Hopefully no team's schedule conflicts with this one.
> If the schedule is a perfect fit for all teams and you feel this touches on
> your concerns, then we hope to see you all there :)
> 
> -- 
> May The Force of OpenStack Be With You, 
> Rico Lin
> irc: ricolin
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral][logo][mascot] Fwd: Mistral logo files

2017-02-13 Thread Renat Akhmerov
Hi, our logos are finally ready!

Renat Akhmerov
@Nokia

> Begin forwarded message:
> 
> From: Heidi Joy Tretheway 
> Subject: Mistral logo files
> Date: 14 February 2017 at 05:00:56 GMT+7
> To: Renat Akhmerov 
> 
> Hi Renat, 
> 
> I’m excited to finally be able to share final project logo files with you. 
> Inside this folder, you’ll find full-color and one-color versions of the 
> logo, plus a mascot-only version, in EPS, JPG and PNG formats. You can use 
> them on presentations and wherever else you’d like to add some team flair. 
> 
> https://www.dropbox.com/sh/ngoqm9zazohwplb/AAAcgv3JzFE9isrLiU8SL7s4a?dl=0 
> 
> 
> At the PWG, we’ll have stickers for your team of the mascot, plus signage on 
> your room. I’m especially excited for the project teams to see all of the 
> logos together as one group, because they work beautifully together 
> stylistically while making each project’s mark distinctive. Feel free to 
> share this with your team, and thanks to you and to them for the hard work 
> they put into reaching an agreement on the mascot. Also feel free to direct 
> any questions my way!
> 
> 
>   
> Heidi Joy Tretheway
> Senior Marketing Manager, OpenStack Foundation
> 503 816 9769  | Skype: heidi.tretheway 
> 
>     
>  
> 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron]Find a proper way to handle agent RPC message loss

2017-02-13 Thread bo zhaobo
Currently, a Neutron agent reports its state through RPC. The message is
fetched by the Neutron server, which stores the latest info in the DB.

There may be some problems in public cloud scenarios [1].
Neutron and its agents, and Nova and its services (nova-compute), may hit the same
problem. Could any kind people give some suggestions? :) Thanks.

[1] https://bugs.launchpad.net/neutron/+bug/1664299
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [zaqar] Zaqar logo files

2017-02-13 Thread Fei Long Wang
Hi team,

Here is the final version of Zaqar mascot.



 Forwarded Message 
Subject:Zaqar logo files
Date:   Mon, 13 Feb 2017 15:26:12 -0800
From:   Heidi Joy Tretheway 
To: Fei Long Wang 



Hi Fei Long, 

I’m excited to finally be able to share final project logo files with
you. Inside this folder, you’ll find full-color and one-color versions
of the logo, plus a mascot-only version, in EPS, JPG and PNG formats.
You can use them on presentations and wherever else you’d like to add
some team flair. 

https://www.dropbox.com/sh/6z7lw9f09yfkvtg/AAB5rTSchDMJae0sltTuf2zra?dl=0 

At the PWG, we’ll have stickers for your team of the mascot, plus
signage on your room. I’m especially excited for the project teams to
see all of the logos together as one group, because they work
beautifully together stylistically while making each project’s mark
distinctive. Feel free to share this with your team, and thanks to you
and to them for the hard work they put into reaching an agreement on the
mascot. Also feel free to direct any questions my way!


photo   

*Heidi Joy Tretheway*
Senior Marketing Manager, OpenStack Foundation
503 816 9769  | Skype: heidi.tretheway

  





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-13 Thread Clint Byrum
Excerpts from Ian Cordasco's message of 2017-02-13 15:15:12 -0500:
> -Original Message-
> From: Clint Byrum 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: February 13, 2017 at 13:41:00
> To: openstack-dev 
> Subject:  Re: [openstack-dev] [tc][glance][glare][all]
> glance/glare/artifacts/images at the PTG
> 
> > Excerpts from Mikhail Fedosin's message of 2017-02-13 18:23:19 +0300:
> > > Hello!
> > >
> > >
> > > Let me quickly describe my vision of the problem. I asked this question to
> > > Brian last Friday, because it is evident that the projects overlap
> > > in functionality. For this reason, I proposed to bring Glare
> > > back and develop it as a new generation of the Glance service. Perhaps such a
> > > solution would be more correct from my point of view.
> > >
> > > Moving away from Glance, let me remind you why we created Glare service.
> > >
> > > Almost every project works with some binary data and must store it
> > > somewhere, and almost always the storage itself is not part of the
> > > project's mission. This issue has often been neglected. For this reason
> > > there is no single recommended method for storing binary data, which
> > > would have a unified public API and hide all the details of the internal
> > > storage infrastructure.
> > >
> >
> > We have an awesome service for storing binary data in a hierarchical
> > format in Swift. But it's so generic, it can't really just be the image
> > service. But something like Glare is just a way to scope it down and
> > give us a way to ask for "just the images" or "just the heat templates",
> > which I think is a natural thing for cloud users to want.
> >
> > > These questions were answered by Glare. First of all, the service allows
> > > different storage backends to be used for various types of artifacts - an operator can
> > > assign the storage of large files, such as virtual machine images, to
> > > Swift, and for relatively small ones, such as Heat templates, use a MySQL
> > > database.
> > >
> >
> > Meh. Swift isn't exactly slow, or cumbersome for small files.
> >
> > > Then, we have to admit that data tends to change, so we added versioning
> > > of artifacts and dependencies between them, so that it is convenient
> > > for the user to fetch the data of the required version.
> > >
> >
> > Any attempt at versioning that is not git, will frustrate any git user.
> > This cat's already out of the bag, but I'd suggest adding git repositories
> > as a blob container type and finding a way to allow git to push/pull
> > to/from swift. That would be an amazing feature _for swift_ anyway
> > (maybe it already exists?) but it would allow Glare to piggy back on all
> > of the collective versioning capabilities in Git rather than having to
> > chase git.
> 
> So the versioning that's present will frustrate everyone. The
> reasoning for it is that the original Glare developers found a hack
> online to convert the version string into something that a database
> can sort (by turning it into one giant integer basically). (I'm
> certain that's not the only reason, but when challenged with several
> other options they said they couldn't find anyone who had already
> found a way to make it sortable on the version.)
> 
> That aside, I'm not sure anyone wants git (even git-lfs) managing 50GB
> images for them.
> 

Sounds like this was a high level solution that I don't fully understand,
so I'll stop bikeshedding it. But generally I'd say for most OpenStack
services you want to stay low-level whenever possible.

And of course the actual binaries would not be in git. But the metadata
about them would be, which would allow things like bisection,
annotation, etc.
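
As an aside, the "one giant integer" trick Ian describes is roughly the
following (an illustrative Python sketch, not Glare's actual code):

    def version_to_sortable_int(version, width=4):
        # Pack 'major.minor.patch' into a single integer that sorts
        # numerically in a database column; each component gets `width`
        # decimal digits.
        parts = [int(p) for p in version.split('.')]
        parts += [0] * (3 - len(parts))
        result = 0
        for part in parts[:3]:
            result = result * 10 ** width + part
        return result

    # '1.10.0' correctly sorts after '1.9.2', which a plain string column
    # would get wrong.
    assert version_to_sortable_int('1.10.0') > version_to_sortable_int('1.9.2')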

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca] Ideas to work on

2017-02-13 Thread Hochmuth, Roland M
Hi Anqi, See my comments listed below. Regards --Roland

From: An Qi YL Lu >
Date: Sunday, February 12, 2017 at 8:29 PM
To: Roland Hochmuth >
Cc: OpenStack List 
>
Subject: Re: [monasca] Ideas to work on

Hi Roland

I am not sure whether you received my last email because I got a delivery 
failure notification. I am sending this again to ensure that you can see this 
email.

Best,
Anqi

- Original message -
From: An Qi YL Lu/China/IBM
To: roland.hochm...@hpe.com
Cc: openstack-dev@lists.openstack.org
Subject: Re: [monasca] Ideas to work on
Date: Fri, Feb 10, 2017 5:14 PM

Hi Roland

Thanks for your suggestions. The list you made is useful, helping me get clues
about areas that I can work on. I spent some time investigating the BPs
that you introduced.

I am most interested in data retention and metrics deleting.

Data retention: I had a quick look into the data retention policy of InfluxDB.
It apparently supports different retention policies for different series. To my
understanding, the whiteboard in this bp has a straightforward design for this
feature. I didn't quite get what the complex point is. Could you please shed
some light so I can learn where the complicated part is?
The retention policy specified in the bp, 
https://blueprints.launchpad.net/monasca/+spec/per-project-data-retention,  is 
per project. InfluxDB allows retention policies to be set per database, 
https://docs.influxdata.com/influxdb/v1.2/query_language/database_management/#create-retention-policies-with-create-retention-policy.

Currently, we store all metrics for all tenants in one database. One approach, 
which would involve a bit of re-engineering if we choose to do it, would be to 
store metrics for a project in a database for each project.
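
For illustration, with one database per project the retention piece would be
plain InfluxQL along these lines (the database name and duration here are made
up):

    CREATE DATABASE "project_abc123"
    CREATE RETENTION POLICY "metrics_45d" ON "project_abc123" DURATION 45d REPLICATION 1 DEFAULT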

I could also imagine having retention policies per metric per tenant. For
example, there might be metrics for metering that should be stored for a longer
period than operational metrics. There isn't a way to do this directly in
InfluxDB using the built-in data retention policy. However, it could possibly
be done using deletes and scheduled jobs that periodically prune the
database.

For the Vertica database, we, as in HPE, simulate retention policies by running 
a cron job that drops partitions after some period of time, such as 45 days. 
Charter has a more sophisticated cron job that deletes metrics from specific 
tenants at different periods than the operational metrics. For example, tenants 
of the cloud might have their metrics deleted every two weeks. Metering metrics 
might be deleted every 13 months.

The problem with deleting specific metrics is the performance. Dropping
partitions is extremely fast. However, deleting metrics might be slow and also
lock the database, preventing writes and/or queries to it. Therefore, to delete
metrics, you could trickle deletes in, reducing the overall impact for any
period of time, or, as in the Charter case, run the deletion script at 2:00 AM,
when usage of the system is light.

Metrics deleting: InfluxDB 1.1 (or any version after 0.9) supports
deleting series, though you cannot specify a time interval for this operation. It
simply deletes all points from a series in a database. I think one of the
tricky parts is deciding which data depends on a metric to be deleted, such as
measurements and alarms. Please point it out if my understanding is not precise.
The problem, I believe, is that a single series in InfluxDB holds the data for
multiple tenants. Deleting a single series would then delete data
for all tenants. Similar to data retention policies, to support deletion of
metrics by metric name and optional dimensions, the storage of metrics would
need to be handled differently and/or some other solution designed.


I would like to look at log publishing as well. But unfortunately I did not
find the monasca-log-api doc, which is supposed to be at
https://github.com/openstack/monasca-log-api/tree/master/docs . I don't know
how this log-api works now. Please share a copy of the doc with me if you have one.
The new changes proposed by Steve Simpson are in the review that he just 
published at, https://review.openstack.org/#/c/433016/.

The current documentation is now under a slightly different directory than the 
link above at, 
https://github.com/openstack/monasca-log-api/blob/master/documentation/monasca-log-api-spec.md.

Best,
Anqi

- Original message -
From: "Hochmuth, Roland M" 
>
To: OpenStack List 
>, 
An Qi YL Lu/China/IBM@IBMCN
Cc:
Subject: [monasca] Ideas to work on
Date: Fri, Feb 10, 2017 

[openstack-dev] [Horizon] Final project mascot

2017-02-13 Thread Richard Jones
Hi folks,

Here's the final mascot/logo from the Foundation.


Richard

-- Forwarded message --
From: Heidi Joy Tretheway 
Date: 14 February 2017 at 08:49
Subject: Horizon project mascot
To: Rob Cresswell , Richard Jones <
r1chardj0...@gmail.com>


Hi Richard and Rob,

I’m excited to finally be able to share final project logo files with you.
Inside this folder, you’ll find full-color and one-color versions of the
logo, plus a mascot-only version, in EPS, JPG and PNG formats. You can use
them on presentations and wherever else you’d like to add some team flair.

https://www.dropbox.com/sh/vyig9h2ko7onkcr/AACRX7ChY3lA0cNYs1OhCGZja?dl=0

At the PWG, we’ll have stickers for your team of the mascot, plus signage
on your room. I’m especially excited for the project teams to see all of
the logos together as one group, because they work beautifully together
stylistically while making each project’s mark distinctive. Feel free to
share this with your team, and thanks to you and to them for the hard work
they put into reaching an agreement on the mascot. Also feel free to direct
any questions my way!


[image: photo]
*Heidi Joy Tretheway*
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: heidi.tretheway

  
  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Moving coverage jobs to check queue

2017-02-13 Thread Ian Wienand
Hello,

In a prior thread we discussed moving coverage jobs from "post" jobs
to the check queue [1].

Firstly, if you have no idea what I'm talking about, the coverage jobs
run the unit tests under the coverage tool [2] and produce some output
like [3] which identifies which parts of the code have been executed.
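
Locally this is roughly equivalent to running the project's coverage tox
environment, assuming the project defines one (the exact environment name
varies a little between projects):

    tox -e cover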

It seems the history of these jobs in "post" was that they took a
little too long in the check queue with added measurement overhead.
Looking at, for example, nova jobs, the coverage job is currently in
the 10 minute range.  I'm not sure if the coverage tool has got better
or nova reduced the job time; both are likely.  This has led to much
inconsistency as to where we run the coverage jobs -- it mostly looks
like where they run depends on which other project you used as a
template.  Some projects run in both check and post which seems
unnecessary.

The coverage job results in post are quite hard to find.  You need to
firstly know the job even runs, then find the correct commit sha
(which is probably the merge commit, not the one in gerrit) and then
know how to manually navigate the logs.openstack.org file-hierarchy.
It's probably no surprise that according to apache logs, nobody has
accessed a post coverage-job output at all within about the last
month.  Also, as per the prior email, if the job is actually failing
you get no notification at all.

A recent change has made "-nv" (non-voting) coverage jobs available
[4] which simplifies this somewhat, as there is no need to put special
regexes in to stop voting.

I have proposed [5] which moves all coverage post jobs to non-voting
check queue jobs.  It also renames them with our standard "gate-" for
consistency.  I believe this will improve usage of the tool and also
clean up our layout.yaml a bit.
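
For a single project, the layout.yaml change sketched in [5] is roughly of this
shape (made-up project and job names; see the review for the real thing):

    - name: openstack/example
      check:
        # moved here, non-voting, from the post queue
        - gate-example-coverage-ubuntu-xenial-nv

with the corresponding coverage entry dropped from that project's post queue.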

Feel free to raise concerns in [5]

Thanks,

-i

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-July/099491.html
[2] https://coverage.readthedocs.io/en/coverage-4.3.4/
[3] 
http://logs.openstack.org/c3/c3671ee7da154e251c2915d4aced2b1a2bd8dfa9/post/nova-coverage-db-ubuntu-xenial/bc21639/cover/
[4] https://review.openstack.org/431783
[5] https://review.openstack.org/432836

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] [all] Pike PTG QA Input / Feedback Session

2017-02-13 Thread Andrea Frittoli
Hi folks,

at the PTG in Atlanta we will schedule a session [0] to collect and discuss
feedback and
input from the community on existing QA projects.
We will use the resulting material in a later session to set priorities of
the QA team for Pike.
Note that for Tempest plugins specifically there will be another dedicated
session [1].

Priorities of the team are not written in stone, but I would like to be
able to start off in the right direction
from the beginning of the cycle, and input in the etherpad before the PTG
would be very beneficial for the QA team.
Please accompany your input with your name / IRC nick.

If you plan to attend the session and/or would like your input to be
discussed please make a note on the etherpad.

Thank you!

Andrea

IRC: andreaf

[0] https://etherpad.openstack.org/p/qa-ptg-pike-community-input
[1] https://etherpad.openstack.org/p/qa-ptg-pike-tempest-plugins
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Quality Assurance logo files

2017-02-13 Thread Andrea Frittoli
-- Forwarded message -
From: Heidi Joy Tretheway 
Date: Mon, Feb 13, 2017 at 10:09 PM
Subject: Quality Assurance logo files
To: , 


Hi Ken’ichi and Andrea,

I’m excited to finally be able to share final project logo files with you.
Inside this folder, you’ll find full-color and one-color versions of the
logo, plus a mascot-only version, in EPS, JPG and PNG formats. You can use
them on presentations and wherever else you’d like to add some team flair.

https://www.dropbox.com/sh/bxb0zmqbcnzov20/AAAXEYaYORHe5hXXiPa09Tisa?dl=0

At the PWG, we’ll have stickers for your team of the mascot, plus signage
on your room. I’m especially excited for the project teams to see all of
the logos together as one group, because they work beautifully together
stylistically while making each project’s mark distinctive. Feel free to
share this with your team, and thanks to you and to them for the hard work
they put into reaching an agreement on the mascot. Also feel free to direct
any questions my way!


[image: photo]
*Heidi Joy Tretheway*
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: heidi.tretheway

  
  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [stadium] subprojects on independent release cycle

2017-02-13 Thread Takashi Yamamoto
On Thu, Feb 9, 2017 at 7:47 AM, Takashi Yamamoto  wrote:
> I plan to cut a release for networking-midonet once the relevant projects
> are ready, i.e. neutron-vpnaas and tap-as-a-service.

https://review.openstack.org/#/c/429437/

>
> On Thu, Feb 9, 2017 at 1:16 AM, Armando M.  wrote:
>>
>>
>> On 2 February 2017 at 16:09, Armando M.  wrote:
>>>
>>> Hi neutrinos,
>>>
>>> I have put a number of patches in the merge queue for a few sub-projects.
>>> We currently have a number of these that are on an independent release
>>> schedule. In particular:
>>>
>>> networking-bagpipe
>>> networking-bgpvpn
>>> networking-midonet
>>> networking-odl
>>> networking-sfc
>>>
>>> Please make sure that between now and March 10th [1], you work to prepare
>>> at least one Ocata release that works with neutron's [2] and cut a stable
>>> branch before then. That would greatly help consumers who are interested
>>> in assembling these bits together and starting to test Ocata as soon as it's
>>> out.
>>>
>>> Your collaboration is much appreciated.
>>>
>>> Many thanks,
>>> Armando
>>
>>
>> Hi neutrinos,
>>
>> I did not hear anything back from the liaisons of the above-mentioned
>> projects over the past few days. Can you clarify your plans for cutting an
>> Ocata release?
>>
>> Thanks,
>> Armando
>>
>>>
>>> [1] https://releases.openstack.org/ocata/schedule.html
>>> [2] https://review.openstack.org/#/c/428474/
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Pike PTG Schedule

2017-02-13 Thread Andrea Frittoli
Hello team,

I did some work on the etherpad [0] to organise the QA PTG schedule based
on the proposed sessions.
I separated discussion sessions from hands-on types of activities. I think the
former will require a fixed schedule so folks know when to pop in to
attend; the latter may happen more ad hoc based on who's there and not busy
in a session. We can have a final discussion on this in the QA meeting on
Thursday [1].

I haven't fixed the exact schedule yet; I will do so soon. Meanwhile, please
let me know if you think your session will need more or less than 30 minutes,
which is the standard time I would allocate.
Discussions that overrun the planned time can go in a parking lot, which we
can pick up later at the PTG or during meetings.
Any session preparation work / existing material, previous discussions,
links and so on can go in the respective etherpad.

I think we have a good set of topics to discuss and code to write, enough
to keep us busy for more than the time we have.
Nonetheless, if you have more topics to propose, please continue adding them
to the etherpad.

Thank you and regards,

Andrea

[0] https://etherpad.openstack.org/p/qa-ptg-pike
[1]
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_February_16th_2017_.281700_UTC.29
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2017-02-13 Thread Takashi Yamamoto
On Tue, Feb 14, 2017 at 8:15 AM, Doug Hellmann  wrote:
> Excerpts from Takashi Yamamoto's message of 2017-02-14 07:31:34 +0900:
>> hi,
>>
>> On Tue, Feb 14, 2017 at 1:54 AM, Doug Hellmann  wrote:
>> > Excerpts from Takashi Yamamoto's message of 2017-02-06 10:32:10 +0900:
>> >> On Wed, Feb 1, 2017 at 9:46 AM, Takashi Yamamoto  
>> >> wrote:
>> >> > hi,
>> >> >
>> >> > On Fri, Jan 27, 2017 at 7:46 AM, Doug Hellmann  
>> >> > wrote:
>> >> >> Excerpts from Takashi Yamamoto's message of 2017-01-26 11:42:48 +0900:
>> >> >>> hi,
>> >> >>>
>> >> >>> On Sat, Jan 14, 2017 at 2:17 AM, Doug Hellmann 
>> >> >>>  wrote:
>> >> >>> > Excerpts from Dariusz Śmigiel's message of 2017-01-13 09:11:01 
>> >> >>> > -0600:
>> >> >>> >> 2017-01-12 21:43 GMT-06:00 Takashi Yamamoto 
>> >> >>> >> :
>> >> >>> >> > hi,
>> >> >>> >> >
>> >> >>> >> > On Wed, Nov 16, 2016 at 11:02 AM, Armando M.  
>> >> >>> >> > wrote:
>> >> >>> >> >> Hi
>> >> >>> >> >>
>> >> >>> >> >> As of today, the project neutron-vpnaas is no longer part of 
>> >> >>> >> >> the neutron
>> >> >>> >> >> governance. This was a decision reached after the project saw a 
>> >> >>> >> >> dramatic
>> >> >>> >> >> drop in active development over a prolonged period of time.
>> >> >>> >> >>
>> >> >>> >> >> What does this mean in practice?
>> >> >>> >> >>
>> >> >>> >> >> From a visibility point of view, release notes and 
>> >> >>> >> >> documentation will no
>> >> >>> >> >> longer appear on openstack.org as of Ocata going forward.
>> >> >>> >> >> No more releases will be published by the neutron release team.
>> >> >>> >> >> The neutron team will stop proposing fixes for the upstream CI, 
>> >> >>> >> >> if not
>> >> >>> >> >> solely on a voluntary basis (e.g. I still felt like proposing 
>> >> >>> >> >> [2]).
>> >> >>> >> >>
>> >> >>> >> >> How does it affect you, the user or the deployer?
>> >> >>> >> >>
>> >> >>> >> >> You can continue to use vpnaas and its CLI via the 
>> >> >>> >> >> python-neutronclient and
>> >> >>> >> >> expect it to work with neutron up until the newton
>> >> >>> >> >> release/python-neutronclient 6.0.0. After this point, if you 
>> >> >>> >> >> want a release
>> >> >>> >> >> that works for Ocata or newer, you need to proactively request 
>> >> >>> >> >> a release
>> >> >>> >> >> [5], and reach out to a member of the neutron release team [3] 
>> >> >>> >> >> for approval.
>> >> >>> >> >
>> >> >>> >> > i want to make an ocata release. (and more importantly the 
>> >> >>> >> > stable branch,
>> >> >>> >> > for the benefit of consuming subprojects)
>> >> >>> >> > for the purpose, the next step would be ocata-3, right?
>> >> >>> >>
>> >> >>> >> Hey Takashi,
>> >> >>> >> If you want to release new version of neutron-vpnaas, please look 
>> >> >>> >> at [1].
>> >> >>> >> This is the place, which you need to update and based on provided
>> >> >>> >> details, tags and branches will be cut.
>> >> >>> >>
>> >> >>> >> [1] 
>> >> >>> >> https://github.com/openstack/releases/blob/master/deliverables/ocata/neutron-vpnaas.yaml
>> >> >>> >
>> >> >>> > Unfortunately, since vpnaas is no longer part of an official 
>> >> >>> > project,
>> >> >>> > we won't be using the releases repository to manage and publish
>> >> >>> > information about the releases. It'll need to be done by hand.
>> >> >>>
>> >> >>> who can/should do it by hand?
>> >> >>
>> >> >> I can do it. Let me know the version number, and for each repository 
>> >> >> the
>> >> >> SHA of the commit on the master branch to be tagged.
>> >>
>> >> please make it with the following.  thank you!
>> >>
>> >> stable/ocata
>> >> 10.0.0
>> >> openstack/neutron-vpnaas
>> >> d6db1238a4950df03dfb28acabcf4df14ebfa3ac
>> >
>> > Sorry, I missed this email earlier.
>>
>> no problem!
>>
>> >
>> > Do you want 10.0.0 or 10.0.0.0rc1?
>>
>> 10.0.0.
>
> OK, the tag is in place and the branch is created.

thank you!

>
> Doug
>
>>
>> >
>> > Doug
>> >
>> >>
>> >> >
>> >> > thank you. i'll ask you when necessary.
>> >> >
>> >> > i think it's fine to just make a branch from master when stable branch 
>> >> > is cut
>> >> > for neutron.  how others think?
>> >> >
>> >> >>
>> >> >> Doug
>> >> >>
>> >> >>>
>> >> >>> >
>> >> >>> > Doug
>> >> >>> >
>> >> >>> >>
>> >> >>> >> BR, Dariusz
>> >> >>> >>
>> >> >>> >
>> >> >>> > __
>> >> >>> > OpenStack Development Mailing List (not for usage questions)
>> >> >>> > Unsubscribe: 
>> >> >>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >>>
>> >> >>
>> >> >> __
>> >> >> OpenStack Development Mailing List (not for usage questions)
>> >> >> Unsubscribe: 
>> >> >> 

[openstack-dev] Fwd: Docs project mascot

2017-02-13 Thread Lana Brindley

Hi everyone,

Here's the final docs logo from Foundation.


Lana

 Forwarded Message 
Subject:Docs project mascot
Date:   Mon, 13 Feb 2017 13:38:46 -0800
From:   Heidi Joy Tretheway 
To: Alexandra Settle , Lana Brindley 




Hi Lana and Alexandra, 

I’m excited to finally be able to share final project logo files with you. 
Inside this folder, you’ll find full-color and one-color versions of the logo, 
plus a mascot-only version, in EPS, JPG and PNG formats. You can use them on 
presentations and wherever else you’d like to add some team flair. 

https://www.dropbox.com/sh/htu234yuf963i9b/AAAsraXwT3a5O9HNmms4E9yFa?dl=0

At the PWG, we’ll have stickers for your team of the mascot, plus signage on 
your room. I’m especially excited for the project teams to see all of the logos 
together as one group, because they work beautifully together stylistically 
while making each project’s mark distinctive. Feel free to share this with your 
team, and thanks to you and to them for the hard work they put into reaching an 
agreement on the mascot. Also feel free to direct any questions my way!


*Heidi Joy Tretheway*
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: heidi.tretheway
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] What resources can and can't be reused

2017-02-13 Thread Cathy Zhang
Hi Igor,

The list of rules are correct.

Best regards,
Cathy

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, February 13, 2017 1:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-sfc] What resources can and can't be 
reused

Hi Louis,

Yes, that makes sense - thanks for the feedback and the responses on my points.

Best regards,
Igor.

From: Henry Fourie [mailto:louis.fou...@huawei.com]
Sent: Monday, February 13, 2017 9:15 PM
To: OpenStack Development Mailing List (not for usage questions) 
Subject: Re: [openstack-dev] [networking-sfc] What resources can and can't be 
reused

Igor,
   For #6, the requirement on source-port for a flow-classifier is only for the 
OVS driver. This is not a restriction for other backend drivers.
In the case where there is no need for a sfc proxy to re-classify traffic 
returned from the egress port of a SF,
i.e., the SF is NSH-aware and it can receive, process and return the NSH, this 
restriction does not apply.
- Louis

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, February 13, 2017 12:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-sfc] What resources can and can't be 
reused

Hi Cathy,

Relax only a couple of them. For Ocata I'm looking at disabling #6 if the 
chain/graph doesn't include sfc proxies (#6 seems to only be necessary if there 
are sfc proxies [1]). For Pike it would be interesting to make port-pair-groups 
completely reusable, as long as the flow classifiers don't make the choice of 
chain ambiguous.

[1] 
http://eavesdrop.openstack.org/meetings/service_chaining/2017/service_chaining.2017-01-12-17.14.log.html

Best regards,
Igor.

From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Monday, February 13, 2017 7:50 PM
To: OpenStack Development Mailing List (not for usage questions) 
Subject: Re: [openstack-dev] [networking-sfc] What resources can and can't be 
reused

Hi Igor,

Before we dive into evaluation of the rules you listed below, I would like to 
understand whether you are suggesting to enforce the rules or relax the  
rules/constraints you listed?
Could you clarify it?

Thanks,
Cathy

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, February 13, 2017 11:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [networking-sfc] What resources can and can't be reused

Hi networking-sfc,

As part of my work regarding SFC Encapsulation and SFC Graphs, I exercised the 
API to understand exactly what resources can be reused, to possibly relax a few 
of the constraints when a chain is encapsulated end-to-end.
I'm requesting that the leaders and cores take a look at the list below, and 
reply if you see something that doesn't look quite right (or have any other 
comment/question). Thanks!

1. Every flow-classifier must have a logical source port.
2. The flow-classifier must be unique in its (full) definition.
3. A port-chain can have multiple flow-classifiers associated with exactly the 
same definition BUT different logical source ports.
4. The port-chains can be ambiguous, i.e. match on the same classification 
criteria, if and only if there are 0 flow classifiers associated.
5. The flow classifiers can only be used once, by a single port-chain.
6. Different port-chains cannot be associated to different flow classifiers 
that specify the same classification criteria BUT different logical source 
ports (this is https://bugs.launchpad.net/networking-sfc/+bug/1638421).
7. A port-pair's ingress cannot be in use by another port-pair's ingress.
8. A port-pair's egress cannot be in use by another port-pair's egress.
9. A port-pair can be associated to another port-pair's ingress and egress 
ports BUT swapped (i1=e2, e1=i2).
10. The port-pairs become "in use" when a port-pair-group associates them, so 
they can't be reused across port-pair-groups.
11. A port-chain can include port-pair-groups already associated to other 
port-chains, as long as not the exact same sequence as another port-chain (e.g. 
pc1: [ppg1,ppg2]; pc2: [ppg1] - is fine).

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2017-02-13 Thread Doug Hellmann
Excerpts from Takashi Yamamoto's message of 2017-02-14 07:31:34 +0900:
> hi,
> 
> On Tue, Feb 14, 2017 at 1:54 AM, Doug Hellmann  wrote:
> > Excerpts from Takashi Yamamoto's message of 2017-02-06 10:32:10 +0900:
> >> On Wed, Feb 1, 2017 at 9:46 AM, Takashi Yamamoto  
> >> wrote:
> >> > hi,
> >> >
> >> > On Fri, Jan 27, 2017 at 7:46 AM, Doug Hellmann  
> >> > wrote:
> >> >> Excerpts from Takashi Yamamoto's message of 2017-01-26 11:42:48 +0900:
> >> >>> hi,
> >> >>>
> >> >>> On Sat, Jan 14, 2017 at 2:17 AM, Doug Hellmann  
> >> >>> wrote:
> >> >>> > Excerpts from Dariusz Śmigiel's message of 2017-01-13 09:11:01 -0600:
> >> >>> >> 2017-01-12 21:43 GMT-06:00 Takashi Yamamoto :
> >> >>> >> > hi,
> >> >>> >> >
> >> >>> >> > On Wed, Nov 16, 2016 at 11:02 AM, Armando M.  
> >> >>> >> > wrote:
> >> >>> >> >> Hi
> >> >>> >> >>
> >> >>> >> >> As of today, the project neutron-vpnaas is no longer part of the 
> >> >>> >> >> neutron
> >> >>> >> >> governance. This was a decision reached after the project saw a 
> >> >>> >> >> dramatic
> >> >>> >> >> drop in active development over a prolonged period of time.
> >> >>> >> >>
> >> >>> >> >> What does this mean in practice?
> >> >>> >> >>
> >> >>> >> >> From a visibility point of view, release notes and documentation 
> >> >>> >> >> will no
> >> >>> >> >> longer appear on openstack.org as of Ocata going forward.
> >> >>> >> >> No more releases will be published by the neutron release team.
> >> >>> >> >> The neutron team will stop proposing fixes for the upstream CI, 
> >> >>> >> >> if not
> >> >>> >> >> solely on a voluntary basis (e.g. I still felt like proposing 
> >> >>> >> >> [2]).
> >> >>> >> >>
> >> >>> >> >> How does it affect you, the user or the deployer?
> >> >>> >> >>
> >> >>> >> >> You can continue to use vpnaas and its CLI via the 
> >> >>> >> >> python-neutronclient and
> >> >>> >> >> expect it to work with neutron up until the newton
> >> >>> >> >> release/python-neutronclient 6.0.0. After this point, if you 
> >> >>> >> >> want a release
> >> >>> >> >> that works for Ocata or newer, you need to proactively request a 
> >> >>> >> >> release
> >> >>> >> >> [5], and reach out to a member of the neutron release team [3] 
> >> >>> >> >> for approval.
> >> >>> >> >
> >> >>> >> > i want to make an ocata release. (and more importantly the stable 
> >> >>> >> > branch,
> >> >>> >> > for the benefit of consuming subprojects)
> >> >>> >> > for the purpose, the next step would be ocata-3, right?
> >> >>> >>
> >> >>> >> Hey Takashi,
> >> >>> >> If you want to release new version of neutron-vpnaas, please look 
> >> >>> >> at [1].
> >> >>> >> This is the place, which you need to update and based on provided
> >> >>> >> details, tags and branches will be cut.
> >> >>> >>
> >> >>> >> [1] 
> >> >>> >> https://github.com/openstack/releases/blob/master/deliverables/ocata/neutron-vpnaas.yaml
> >> >>> >
> >> >>> > Unfortunately, since vpnaas is no longer part of an official project,
> >> >>> > we won't be using the releases repository to manage and publish
> >> >>> > information about the releases. It'll need to be done by hand.
> >> >>>
> >> >>> who can/should do it by hand?
> >> >>
> >> >> I can do it. Let me know the version number, and for each repository the
> >> >> SHA of the commit on the master branch to be tagged.
> >>
> >> please make it with the following.  thank you!
> >>
> >> stable/ocata
> >> 10.0.0
> >> openstack/neutron-vpnaas
> >> d6db1238a4950df03dfb28acabcf4df14ebfa3ac
> >
> > Sorry, I missed this email earlier.
> 
> no problem!
> 
> >
> > Do you want 10.0.0 or 10.0.0.0rc1?
> 
> 10.0.0.

OK, the tag is in place and the branch is created.

Doug

> 
> >
> > Doug
> >
> >>
> >> >
> >> > thank you. i'll ask you when necessary.
> >> >
> >> > i think it's fine to just make a branch from master when stable branch 
> >> > is cut
> >> > for neutron.  how others think?
> >> >
> >> >>
> >> >> Doug
> >> >>
> >> >>>
> >> >>> >
> >> >>> > Doug
> >> >>> >
> >> >>> >>
> >> >>> >> BR, Dariusz
> >> >>> >>
> >> >>> >
> >> >>> > __
> >> >>> > OpenStack Development Mailing List (not for usage questions)
> >> >>> > Unsubscribe: 
> >> >>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >>>
> >> >>
> >> >> __
> >> >> OpenStack Development Mailing List (not for usage questions)
> >> >> Unsubscribe: 
> >> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 

[openstack-dev] [keystone] Pike PTG scheduling

2017-02-13 Thread Lance Bragstad
Hey folks,


We've had an etherpad [0] floating for the last few weeks collecting ideas
for PTG sessions. I spent today finalizing several of the existing topics
and porting others from various sources. While I think this is a pretty
exhaustive list, I'm leaving it open for any last minute suggestions.

At the bottom I've laid out a very basic schedule. I'm trying to group
larger discussions into available time slots, as well as coordinate
cross-project sessions.

Feel free to have a look. If you see anything that conflicts with something
we should be interested in, please let me know and I'll do some
re-shuffling. If you would like to add another topic, do so in the original
list. I'm crossing them off as I find time slots for them in the agenda
below (only so that I don't forget to schedule a topic).

Thanks!


[0] https://etherpad.openstack.org/p/keystone-pike-ptg
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-13 Thread Thomas Herve
On Mon, Feb 13, 2017 at 8:39 PM, Clint Byrum  wrote:

[snip]

> Any attempt at versioning that is not git, will frustrate any git user.
> This cat's already out of the bag, but I'd suggest adding git repositories
> as a blob container type and finding a way to allow git to push/pull
> to/from swift. That would be an amazing feature _for swift_ anyway
> (maybe it already exists?) but it would allow Glare to piggy back on all
> of the collective versioning capabilities in Git rather than having to
> chase git.

That has been done:
https://blogs.rdoproject.org/6642/openstack-swift-as-backend-for-git-part-1

I don't know if it's maintained nowadays, but I suspect it could be
picked up if interest raises.

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Pike PTG Planning

2017-02-13 Thread Matt Riedemann
I'm working on organizing the nova-ptg-pike etherpad [1] which involves 
trying to come up with a rough schedule to cover topics that might 
impact other projects.


I'm starting by simply grouping related topics (placement, cells, 
quotas, etc) and then figure out what times work best for discussing 
those since other teams might be involved, like Ironic will want to be 
around for some of the placement discussion because of resource classes.


For the rest of the smaller items I'm going to just group those into a 
miscellaneous section in the etherpad.


What I'd like to see is that if you have one of these miscellaneous 
items and are not going to be around for the full Wednesday, Thursday, 
Friday, then please make a note of that by your item and I'll sort the 
topics appropriately. I think we'll cover the miscellaneous items when 
we have time between bigger scheduled topics, and then whatever is 
leftover will be discussed on Friday. I know some people won't be around 
on Friday, or are leaving early, so if that's the case please make a 
note of that in the etherpad.


[1] https://etherpad.openstack.org/p/nova-ptg-pike

--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2017-02-13 Thread Takashi Yamamoto
hi,

On Tue, Feb 14, 2017 at 1:54 AM, Doug Hellmann  wrote:
> Excerpts from Takashi Yamamoto's message of 2017-02-06 10:32:10 +0900:
>> On Wed, Feb 1, 2017 at 9:46 AM, Takashi Yamamoto  
>> wrote:
>> > hi,
>> >
>> > On Fri, Jan 27, 2017 at 7:46 AM, Doug Hellmann  
>> > wrote:
>> >> Excerpts from Takashi Yamamoto's message of 2017-01-26 11:42:48 +0900:
>> >>> hi,
>> >>>
>> >>> On Sat, Jan 14, 2017 at 2:17 AM, Doug Hellmann  
>> >>> wrote:
>> >>> > Excerpts from Dariusz Śmigiel's message of 2017-01-13 09:11:01 -0600:
>> >>> >> 2017-01-12 21:43 GMT-06:00 Takashi Yamamoto :
>> >>> >> > hi,
>> >>> >> >
>> >>> >> > On Wed, Nov 16, 2016 at 11:02 AM, Armando M.  
>> >>> >> > wrote:
>> >>> >> >> Hi
>> >>> >> >>
>> >>> >> >> As of today, the project neutron-vpnaas is no longer part of the 
>> >>> >> >> neutron
>> >>> >> >> governance. This was a decision reached after the project saw a 
>> >>> >> >> dramatic
>> >>> >> >> drop in active development over a prolonged period of time.
>> >>> >> >>
>> >>> >> >> What does this mean in practice?
>> >>> >> >>
>> >>> >> >> From a visibility point of view, release notes and documentation 
>> >>> >> >> will no
>> >>> >> >> longer appear on openstack.org as of Ocata going forward.
>> >>> >> >> No more releases will be published by the neutron release team.
>> >>> >> >> The neutron team will stop proposing fixes for the upstream CI, if 
>> >>> >> >> not
>> >>> >> >> solely on a voluntary basis (e.g. I still felt like proposing [2]).
>> >>> >> >>
>> >>> >> >> How does it affect you, the user or the deployer?
>> >>> >> >>
>> >>> >> >> You can continue to use vpnaas and its CLI via the 
>> >>> >> >> python-neutronclient and
>> >>> >> >> expect it to work with neutron up until the newton
>> >>> >> >> release/python-neutronclient 6.0.0. After this point, if you want 
>> >>> >> >> a release
>> >>> >> >> that works for Ocata or newer, you need to proactively request a 
>> >>> >> >> release
>> >>> >> >> [5], and reach out to a member of the neutron release team [3] for 
>> >>> >> >> approval.
>> >>> >> >
>> >>> >> > i want to make an ocata release. (and more importantly the stable 
>> >>> >> > branch,
>> >>> >> > for the benefit of consuming subprojects)
>> >>> >> > for the purpose, the next step would be ocata-3, right?
>> >>> >>
>> >>> >> Hey Takashi,
>> >>> >> If you want to release new version of neutron-vpnaas, please look at 
>> >>> >> [1].
>> >>> >> This is the place, which you need to update and based on provided
>> >>> >> details, tags and branches will be cut.
>> >>> >>
>> >>> >> [1] 
>> >>> >> https://github.com/openstack/releases/blob/master/deliverables/ocata/neutron-vpnaas.yaml
>> >>> >
>> >>> > Unfortunately, since vpnaas is no longer part of an official project,
>> >>> > we won't be using the releases repository to manage and publish
>> >>> > information about the releases. It'll need to be done by hand.
>> >>>
>> >>> who can/should do it by hand?
>> >>
>> >> I can do it. Let me know the version number, and for each repository the
>> >> SHA of the commit on the master branch to be tagged.
>>
>> please make it with the following.  thank you!
>>
>> stable/ocata
>> 10.0.0
>> openstack/neutron-vpnaas
>> d6db1238a4950df03dfb28acabcf4df14ebfa3ac
>
> Sorry, I missed this email earlier.

no problem!

>
> Do you want 10.0.0 or 10.0.0.0rc1?

10.0.0.

>
> Doug
>
>>
>> >
>> > thank you. i'll ask you when necessary.
>> >
>> > i think it's fine to just make a branch from master when stable branch is 
>> > cut
>> > for neutron.  how others think?
>> >
>> >>
>> >> Doug
>> >>
>> >>>
>> >>> >
>> >>> > Doug
>> >>> >
>> >>> >>
>> >>> >> BR, Dariusz
>> >>> >>
>> >>> >
>> >>> > __
>> >>> > OpenStack Development Mailing List (not for usage questions)
>> >>> > Unsubscribe: 
>> >>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] What resources can and can't be reused

2017-02-13 Thread Duarte Cardoso, Igor
Hi Louis,

Yes, that makes sense - thanks for the feedback and the responses on my points.

Best regards,
Igor.

From: Henry Fourie [mailto:louis.fou...@huawei.com]
Sent: Monday, February 13, 2017 9:15 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [networking-sfc] What resources can and can't be 
reused

Igor,
   For #6, the requirement on source-port for a flow-classifier is only for the 
OVS driver. This is not a restriction for other backend drivers.
In the case where there is no need for a sfc proxy to re-classify traffic 
returned from the egress port of a SF,
i.e., the SF is NSH-aware and it can receive, process and return the NSH, this 
restriction does not apply.
- Louis

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, February 13, 2017 12:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-sfc] What resources can and can't be 
reused

Hi Cathy,

Relax only a couple of them. For Ocata I'm looking at disabling #6 if the 
chain/graph doesn't include sfc proxies (#6 seems to only be necessary if there 
are sfc proxies [1]). For Pike it would be interesting to make port-pair-groups 
completely reusable, as long as the flow classifiers don't make the choice of 
chain ambiguous.

[1] 
http://eavesdrop.openstack.org/meetings/service_chaining/2017/service_chaining.2017-01-12-17.14.log.html

Best regards,
Igor.

From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Monday, February 13, 2017 7:50 PM
To: OpenStack Development Mailing List (not for usage questions) 
Subject: Re: [openstack-dev] [networking-sfc] What resources can and can't be 
reused

Hi Igor,

Before we dive into evaluation of the rules you listed below, I would like to 
understand whether you are suggesting to enforce the rules or relax the  
rules/constraints you listed?
Could you clarify it?

Thanks,
Cathy

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, February 13, 2017 11:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [networking-sfc] What resources can and can't be reused

Hi networking-sfc,

As part of my work regarding SFC Encapsulation and SFC Graphs, I exercised the 
API to understand exactly what resources can be reused, to possibly relax a few 
of the constraints when a chain is encapsulated end-to-end.
I'm requesting that the leaders and cores take a look at the list below, and 
reply if you see something that doesn't look quite right (or have any other 
comment/question). Thanks!

1. Every flow-classifier must have a logical source port.
2. The flow-classifier must be unique in its (full) definition.
3. A port-chain can have multiple flow-classifiers associated with exactly the 
same definition BUT different logical source ports.
4. The port-chains can be ambiguous, i.e. match on the same classification 
criteria, if and only if there are 0 flow classifiers associated.
5. The flow classifiers can only be used once, by a single port-chain.
6. Different port-chains cannot be associated to different flow classifiers 
that specify the same classification criteria BUT different logical source 
ports (this is https://bugs.launchpad.net/networking-sfc/+bug/1638421).
7. A port-pair's ingress cannot be in use by another port-pair's ingress.
8. A port-pair's egress cannot be in use by another port-pair's egress.
9. A port-pair can be associated to another port-pair's ingress and egress 
ports BUT swapped (i1=e2, e1=i2).
10. The port-pairs become "in use" when a port-pair-group associates them, so 
they can't be reused across port-pair-groups.
11. A port-chain can include port-pair-groups already associated to other 
port-chains, as long as not the exact same sequence as another port-chain (e.g. 
pc1: [ppg1,ppg2]; pc2: [ppg1] - is fine).

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] What resources can and can't be reused

2017-02-13 Thread Henry Fourie
Igor,
   For #6, the requirement on source-port for a flow-classifier is only for the 
OVS driver. This is not a restriction for other backend drivers.
In the case where there is no need for a sfc proxy to re-classify traffic 
returned from the egress port of a SF,
i.e., the SF is NSH-aware and it can receive, process and return the NSH, this 
restriction does not apply.
- Louis

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, February 13, 2017 12:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-sfc] What resources can and can't be 
reused

Hi Cathy,

Relax only a couple of them. For Ocata I'm looking at disabling #6 if the 
chain/graph doesn't include sfc proxies (#6 seems to only be necessary if there 
are sfc proxies [1]). For Pike it would be interesting to make port-pair-groups 
completely reusable, as long as the flow classifiers don't make the choice of 
chain ambiguous.

[1] 
http://eavesdrop.openstack.org/meetings/service_chaining/2017/service_chaining.2017-01-12-17.14.log.html

Best regards,
Igor.

From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Monday, February 13, 2017 7:50 PM
To: OpenStack Development Mailing List (not for usage questions) 
Subject: Re: [openstack-dev] [networking-sfc] What resources can and can't be 
reused

Hi Igor,

Before we dive into evaluation of the rules you listed below, I would like to 
understand whether you are suggesting to enforce the rules or relax the  
rules/constraints you listed?
Could you clarify it?

Thanks,
Cathy

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, February 13, 2017 11:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [networking-sfc] What resources can and can't be reused

Hi networking-sfc,

As part of my work regarding SFC Encapsulation and SFC Graphs, I exercised the 
API to understand exactly what resources can be reused, to possibly relax a few 
of the constraints when a chain is encapsulated end-to-end.
I'm requesting that the leaders and cores take a look at the list below, and 
reply if you see something that doesn't look quite right (or have any other 
comment/question). Thanks!

1. Every flow-classifier must have a logical source port.
2. The flow-classifier must be unique in its (full) definition.
3. A port-chain can have multiple flow-classifiers associated with exactly the 
same definition BUT different logical source ports.
4. The port-chains can be ambiguous, i.e. match on the same classification 
criteria, if and only if there are 0 flow classifiers associated.
5. The flow classifiers can only be used once, by a single port-chain.
6. Different port-chains cannot be associated to different flow classifiers 
that specify the same classification criteria BUT different logical source 
ports (this is https://bugs.launchpad.net/networking-sfc/+bug/1638421).
7. A port-pair's ingress cannot be in use by another port-pair's ingress.
8. A port-pair's egress cannot be in use by another port-pair's egress.
9. A port-pair can be associated to another port-pair's ingress and egress 
ports BUT swapped (i1=e2, e1=i2).
10. The port-pairs become "in use" when a port-pair-group associates them, so 
they can't be reused across port-pair-groups.
11. A port-chain can include port-pair-groups already associated to other 
port-chains, as long as not the exact same sequence as another port-chain (e.g. 
pc1: [ppg1,ppg2]; pc2: [ppg1] - is fine).

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] What resources can and can't be reused

2017-02-13 Thread Henry Fourie
Igor,
  See inline.

-Louis

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, February 13, 2017 11:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [networking-sfc] What resources can and can't be reused

Hi networking-sfc,

As part of my work regarding SFC Encapsulation and SFC Graphs, I exercised the 
API to understand exactly what resources can be reused, to possibly relax a few 
of the constraints when a chain is encapsulated end-to-end.
I'm requesting that the leaders and cores take a look at the list below, and 
reply if you see something that doesn't look quite right (or have any other 
comment/question). Thanks!


1.  Every flow-classifier must have a logical source port.
LF:  - this is only true for OVS drivers.

2.  The flow-classifier must be unique in its (full) definition.
LF: correct

3.  A port-chain can have multiple flow-classifiers associated with exactly 
the same definition BUT different logical source ports.
LF: correct


4.  The port-chains can be ambiguous, i.e. match on the same classification 
criteria, if and only if there are 0 flow classifiers associated.
LF: If a port-chain has no classifier, then there is no classification so no 
traffic flows through it.

5. The flow classifiers can only be used once, by a single port-chain.
LF: correct.


6.  Different port-chains cannot be associated to different flow 
classifiers that specify the same classification criteria BUT different logical 
source ports (this is https://bugs.launchpad.net/networking-sfc/+bug/1638421).
LF: correct

7. A port-pair's ingress cannot be in use by another port-pair's ingress.
LF: correct, a SF port can only be the ingress port of one port pair.

8. A port-pair's egress cannot be in use by another port-pair's egress.
LF: correct, a SF port can only be the egress port of one port pair.

9. A port-pair can be associated to another port-pair's ingress and egress 
ports BUT swapped (i1=e2, e1=i2).
LF: correct.

10. The port-pairs become "in use" when a port-pair-group associates them, so 
they can't be reused across port-pair-groups.
LF: correct, a port-pair can only be in one port-pair group.

11. A port-chain can include port-pair-groups already associated to other 
port-chains, as long as not the exact same sequence as another port-chain (e.g. 
pc1: [ppg1,ppg2]; pc2: [ppg1] - is fine).
LF: correct.
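
For illustration only, the reuse constraints in rules 7-10 can be restated as a
small validation sketch in Python. The function names and data structures below
are hypothetical and are not part of the networking-sfc code base; the sketch
simply expresses the rules confirmed above in executable form.

def validate_port_pair(new_pair, existing_pairs):
    """Check rules 7-9 for a candidate port-pair.

    new_pair and each entry of existing_pairs are dicts with
    'ingress' and 'egress' Neutron port UUIDs.
    """
    for pair in existing_pairs:
        if new_pair['ingress'] == pair['ingress']:
            raise ValueError('ingress port already used as ingress (rule 7)')
        if new_pair['egress'] == pair['egress']:
            raise ValueError('egress port already used as egress (rule 8)')
        # Swapped reuse (i1=e2, e1=i2) is explicitly allowed by rule 9.
    return True


def validate_port_pair_group(pair_ids, existing_groups):
    """Check rule 10: a port-pair may belong to at most one port-pair-group.

    existing_groups maps port-pair-group id -> list of port-pair ids.
    """
    in_use = {pp for members in existing_groups.values() for pp in members}
    reused = set(pair_ids) & in_use
    if reused:
        raise ValueError('port-pairs already in a group (rule 10): %s'
                         % sorted(reused))
    return True


# Example: swapped reuse of the same two ports is fine (rule 9) ...
existing = [{'ingress': 'port-a', 'egress': 'port-b'}]
assert validate_port_pair({'ingress': 'port-b', 'egress': 'port-a'}, existing)
# ... but reusing 'port-a' as an ingress again would raise (rule 7).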

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-02-13 Thread Loo, Ruby
Hi,

We are feverish to present this week's priorities and subteam report for 
Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and 
formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. Clean up release notes for 7.0.0 https://review.openstack.org/#/c/431188/
1.1. also inspector https://review.openstack.org/#/c/433043/
1.2. also IPA https://review.openstack.org/#/c/433051/
2. portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
3. review maximum specs, especially those proposed for PTG discussions


Bugs (dtantsur)
===
- Stats (diff between 06 Feb 2017 and 13 Feb 2017)
- Ironic: 220 bugs (-5) + 236 wishlist items (+5). 17 new (-8), 181 in progress 
(-3), 0 critical, 25 high and 29 incomplete (+3)
- Inspector: 14 bugs (+1) + 20 wishlist items. 2 new, 12 in progress, 0 
critical, 1 high (-1) and 5 incomplete (+1)
- Nova bugs with Ironic tag: 11. 1 new (+1), 0 critical, 0 high (-1)

Portgroups support (sambetts, vdrok)

* trello: https://trello.com/c/KvVjeK5j/29-portgroups-support
- status as of most recent weekly meeting:
- tempest tests https://review.openstack.org/382476 need review

CI refactoring (dtantsur, lucasagomes)
==
* trello: https://trello.com/c/c96zb3dm/32-ci-refactoring
- status as of most recent weekly meeting:
- standalone tests proposed by vsaienk0 
https://review.openstack.org/#/c/423556/

Rolling upgrades and grenade-partial (rloo, jlvillal)
=
* trello: 
https://trello.com/c/GAlhSzLm/2-rolling-upgrades-and-grenade-with-multi-node
- status as of most recent weekly meeting:
- bumped to Pike.
- patches need reviews: https://review.openstack.org/#/q/topic:bug/1526283.
- Testing work:
- Grenade + multi-tenant is now working!!!  With a couple grenade 
patches that need to land.
- Next step is: grenade + multi-tenant + multi-NODE.
- As expected, it is failing :( Currently debugging the issue to 
determine what is the reason.

Generic boot-from-volume (TheJulia)
===
* trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- status as of most recent weekly meeting:
- Bumped to pike
- API side changes for volume connector information have a procedural -2 
until we can begin making use of the data in the conductor, but should still be 
reviewed
- https://review.openstack.org/#/c/214586/
- This change has been rebased on top of the iPXE template update 
revision to support cinder/iscsi booting.
- Boot from volume/storage cinder interface is up for review
- Julia expecting to rebase these changes Monday 2/14
- 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1559691
- Original volume connection information client patches
- They need OSC support added into the revisions.
- These changes should be expected to land once Pike opens.
- 
https://review.openstack.org/#/q/status:open+project:openstack/python-ironicclient+branch:master+topic:bug/1526231

Driver composition (dtantsur, jroll)

* trello: https://trello.com/c/fTya14y6/14-driver-composition
- gerrit topic: https://review.openstack.org/#/q/status:open+topic:bug/1524745
- status as of most recent weekly meeting:
- a job based on the IPMI hardware type is running non-voting
- will probably be rolled into some other job(s) later
- api-ref merged
- next steps (some yet to be written/finished) as of 13 Feb 2017:
- install guide / admin guide docs - TODO
- client changes:
- driver commands update: https://review.openstack.org/419274
- node-update update: https://review.openstack.org/#/c/431542/
- We should agree on some scope for this feature for Ocata, I guess. Maybe 
we call it semi-done when we finish ^^^
- +1, and we can talk at PTG about anything missing, path to getting 
vendor hw types, etc
- UPD: I think we can call it done for Ocata, modulo docs
- (rloo) i thought CI and hw types equivalents for classic drivers 
was still missing

Rescue mode (JayF)
==
- Work on pause until Ocata is cut (no more rebasing until then :D)
* trello: https://trello.com/c/PwH1pexJ/23-rescue-mode
- Bumped to pike.
- Working in devstack! http://imgur.com/a/dqvE2
- 1/30 status
- need reviews on:
- https://review.openstack.org/#/c/350831/ - API/conductor methods 
(tested working)
- https://review.openstack.org/#/c/353156/ - rescuewait timeout 
periodic task
- https://review.openstack.org/#/c/400437/ - agent driver patch (tested 
working)
- https://review.openstack.org/#/c/408341/ - client support patch 
(tested working)
- 

Re: [openstack-dev] [networking-sfc] What resources can and can't be reused

2017-02-13 Thread Duarte Cardoso, Igor
Hi Cathy,

Relax only a couple of them. For Ocata I'm looking at disabling #6 if the 
chain/graph doesn't include sfc proxies (#6 seems to only be necessary if there 
are sfc proxies [1]). For Pike it would be interesting to make port-pair-groups 
completely reusable, as long as the flow classifiers don't make the choice of 
chain ambiguous.

[1] 
http://eavesdrop.openstack.org/meetings/service_chaining/2017/service_chaining.2017-01-12-17.14.log.html

Best regards,
Igor.

From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Monday, February 13, 2017 7:50 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [networking-sfc] What resources can and can't be 
reused

Hi Igor,

Before we dive into evaluation of the rules you listed below, I would like to 
understand whether you are suggesting to enforce the rules or relax the  
rules/constraints you listed?
Could you clarify it?

Thanks,
Cathy

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, February 13, 2017 11:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [networking-sfc] What resources can and can't be reused

Hi networking-sfc,

As part of my work regarding SFC Encapsulation and SFC Graphs, I exercised the 
API to understand exactly what resources can be reused, to possibly relax a few 
of the constraints when a chain is encapsulated end-to-end.
I'm requesting that the leaders and cores take a look at the list below, and 
reply if you see something that doesn't look quite right (or have any other 
comment/question). Thanks!

1. Every flow-classifier must have a logical source port.
2. The flow-classifier must be unique in its (full) definition.
3. A port-chain can have multiple flow-classifiers associated with exactly the 
same definition BUT different logical source ports.
4. The port-chains can be ambiguous, i.e. match on the same classification 
criteria, if and only if there are 0 flow classifiers associated.
5. The flow classifiers can only be used once, by a single port-chain.
6. Different port-chains cannot be associated to different flow classifiers 
that specify the same classification criteria BUT different logical source 
ports (this is https://bugs.launchpad.net/networking-sfc/+bug/1638421).
7. A port-pair's ingress cannot be in use by another port-pair's ingress.
8. A port-pair's egress cannot be in use by another port-pair's egress.
9. A port-pair can be associated to another port-pair's ingress and egress 
ports BUT swapped (i1=e2, e1=i2).
10. The port-pairs become "in use" when a port-pair-group associates them, so 
they can't be reused across port-pair-groups.
11. A port-chain can include port-pair-groups already associated to other 
port-chains, as long as not the exact same sequence as another port-chain (e.g. 
pc1: [ppg1,ppg2]; pc2: [ppg1] - is fine).

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-13 Thread Ian Cordasco
-Original Message-
From: Clint Byrum 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: February 13, 2017 at 13:41:00
To: openstack-dev 
Subject:  Re: [openstack-dev] [tc][glance][glare][all]
glance/glare/artifacts/images at the PTG

> Excerpts from Mikhail Fedosin's message of 2017-02-13 18:23:19 +0300:
> > Hello!
> >
> >
> > Let me quickly describe my vision of the problem. I asked this question to
> > Brian last Friday, because it is evident that the projects have the
> > intersection in functionality. For this reason, I proposed to bring Glare
> > back and develop it as a new generation of Glance service. Perhaps such a
> > solution would be more correct from my point of view.
> >
> > Moving away from Glance, let me remind you why we created Glare service.
> >
> > Almost every project works with some binary data and must store it
> > somewhere, and almost always storage itself is not the part of the
> > project's mission. This issue has often been neglected. For this reason
> > there is no single recommended method for storing of binary data, which
> > would have a unified public api and hide all the things of the internal
> > storage infrastructure.
> >
>
> We have an awesome service for storing binary data in a hierarchical
> format in Swift. But it's so generic, it can't really just be the image
> service. But something like Glare is just a way to scope it down and
> give us a way to ask for "just the images" or "just the heat templates",
> which I think is a natural thing for cloud users to want.
>
> > These questions were answered by Glare. First of all, the service allows to
> > use different storages for various types of artifacts - an operator can
> > assign the storage of large files, such as virtual machine images, to
> > Swift, and for relatively small ones, such as Heat templates, use a mysql
> > database.
> >
>
> Meh. Swift isn't exactly slow, or cumbersome for small files.
>
> > Then, we have to admit that data tends to change, so we added a versioning
> > of artifacts and dependencies between them, that the user was convenient to
> > take the data of required version.
> >
>
> Any attempt at versioning that is not git, will frustrate any git user.
> This cat's already out of the bag, but I'd suggest adding git repositories
> as a blob container type and finding a way to allow git to push/pull
> to/from swift. That would be an amazing feature _for swift_ anyway
> (maybe it already exists?) but it would allow Glare to piggy back on all
> of the collective versioning capabilities in Git rather than having to
> chase git.

So the versioning that's present will frustrate everyone. The
reasoning for it is that the original Glare developers found a hack
online to convert the version string into something that a database
can sort (by turning it into one giant integer basically). (I'm
certain that's not the only reason, but when challenged with several
other options they said they couldn't find anyone who had already
found a way to make it sortable on the version.)
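
For readers unfamiliar with the kind of hack being referred to, here is a
minimal sketch of the general idea in Python, assuming purely numeric
'major.minor.patch' versions with each component smaller than 1000. It only
illustrates how a version string can be made sortable by a plain integer
column; it is not Glare's actual code, and real semver handling (pre-release
tags, build metadata) needs more care.

# Pack 'major.minor.patch' into one integer whose numeric order matches
# version order, so an ordinary integer column in a database can sort it.
def version_to_int(version, width=1000):
    major, minor, patch = (int(part) for part in version.split('.'))
    if not all(0 <= part < width for part in (major, minor, patch)):
        raise ValueError('component out of range for packing')
    return (major * width + minor) * width + patch

assert version_to_int('1.2.10') > version_to_int('1.2.9')
assert sorted(['1.10.0', '1.2.0', '0.9.9'], key=version_to_int) == \
    ['0.9.9', '1.2.0', '1.10.0']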

That aside, I'm not sure anyone wants git (even git-lfs) managing 50GB
images for them.

> > Often a "binary data" refers to more than one specific object, but a whole
> > lot of files. Therefore, we have implemented the ability to create
> > arbitrary nested folders per one artifact and store multiple files there.
> > And for sure users can receive any file with a single API request.
> >
>
> See above: IMO, use git for this as well and just teach Glare to
> understand git repos rather than having to implement folders in databases.
>
> > For validation and conversion of uploaded data Glare introduces the concept
> > of hooks for the operation. Thus the operator can extend the basic
> > functionality of the system and add integration with third-party systems
> > for each artifact type. For example, for Nokia we implemented integration
> > with custom TOSCA validator.
> >
>
> At first this set off my interop alarm, but it was a false alarm as long
> as this is always limited to 3rd party systems. What worries me is when
> somebody adds one of these for something already in OpenStack, and now
> suddenly a perfectly interoperable app only works right on that one
> special cloud.
>
> > This is just a small overview of the key features of the service. For sure,
> > at the moment Glare is able to do all that Glance can do (except maybe a
> > sharing of artifacts), on the other hand we have added a number of new
> > features, that were requested by cloud operators for a long time.
> >
> > Fyi, now we in Nokia are preparing additional API, which corresponds to the
> > ETSI VNF Packaging Specification [1]. So support of Image v2 API is not an
> > impossible task, and we may implement it as an alternative way of
> > interaction with "Images" artifact type. In this case Nova and other
> > services using Glance are absolutely 

Re: [openstack-dev] [networking-sfc] What resources can and can't be reused

2017-02-13 Thread Cathy Zhang
Hi Igor,

Before we dive into evaluation of the rules you listed below, I would like to 
understand whether you are suggesting to enforce the rules or relax the  
rules/constraints you listed?
Could you clarify it?

Thanks,
Cathy

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, February 13, 2017 11:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [networking-sfc] What resources can and can't be reused

Hi networking-sfc,

As part of my work regarding SFC Encapsulation and SFC Graphs, I exercised the 
API to understand exactly what resources can be reused, to possibly relax a few 
of the constraints when a chain is encapsulated end-to-end.
I'm requesting that the leaders and cores take a look at the list below, and 
reply if you see something that doesn't look quite right (or have any other 
comment/question). Thanks!

1. Every flow-classifier must have a logical source port.
2. The flow-classifier must be unique in its (full) definition.
3. A port-chain can have multiple flow-classifiers associated with exactly the 
same definition BUT different logical source ports.
4. The port-chains can be ambiguous, i.e. match on the same classification 
criteria, if and only if there are 0 flow classifiers associated.
5. The flow classifiers can only be used once, by a single port-chain.
6. Different port-chains cannot be associated to different flow classifiers 
that specify the same classification criteria BUT different logical source 
ports (this is https://bugs.launchpad.net/networking-sfc/+bug/1638421).
7. A port-pair's ingress cannot be in use by another port-pair's ingress.
8. A port-pair's egress cannot be in use by another port-pair's egress.
9. A port-pair can be associated to another port-pair's ingress and egress 
ports BUT swapped (i1=e2, e1=i2).
10. The port-pairs become "in use" when a port-pair-group associates them, so 
they can't be reused across port-pair-groups.
11. A port-chain can include port-pair-groups already associated to other 
port-chains, as long as not the exact same sequence as another port-chain (e.g. 
pc1: [ppg1,ppg2]; pc2: [ppg1] - is fine).

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-13 Thread Clint Byrum
Excerpts from Mikhail Fedosin's message of 2017-02-13 18:23:19 +0300:
> Hello!
> 
> 
> Let me quickly describe my vision of the problem. I asked this question to
> Brian last Friday, because it is evident that the projects have the
> intersection in functionality. For this reason, I proposed to bring Glare
> back and develop it as a new generation of Glance service. Perhaps such a
> solution would be more correct from my point of view.
> 
> Moving away from Glance, let me remind you why we created Glare service.
> 
> Almost every project works with some binary data and must store it
> somewhere, and almost always storage itself is not the part of the
> project's mission. This issue has often been neglected. For this reason
> there is no single recommended method for storing of binary data, which
> would have a unified public api and hide all the things of the internal
> storage infrastructure.
> 

We have an awesome service for storing binary data in a hierarchical
format in Swift. But it's so generic, it can't really just be the image
service. But something like Glare is just a way to scope it down and
give us a way to ask for "just the images" or "just the heat templates",
which I think is a natural thing for cloud users to want.

> These questions were answered by Glare. First of all, the service allows to
> use different storages for various types of artifacts - an operator can
> assign the storage of large files, such as virtual machine images, to
> Swift, and for relatively small ones, such as Heat templates, use a mysql
> database.
> 

Meh. Swift isn't exactly slow, or cumbersome for small files.

> Then, we have to admit that data tends to change, so we added a versioning
> of artifacts and dependencies between them, that the user was convenient to
> take the data of required version.
> 

Any attempt at versioning that is not git, will frustrate any git user.
This cat's already out of the bag, but I'd suggest adding git repositories
as a blob container type and finding a way to allow git to push/pull
to/from swift. That would be an amazing feature _for swift_ anyway
(maybe it already exists?) but it would allow Glare to piggy back on all
of the collective versioning capabilities in Git rather than having to
chase git.

> Often a "binary data" refers to more than one specific object, but a whole
> lot of files. Therefore, we have implemented the ability to create
> arbitrary nested folders per one artifact and store multiple files there.
> And for sure users can receive any file with a single API request.
> 

See above: IMO, use git for this as well and just teach Glare to
understand git repos rather than having to implement folders in databases.

> For validation and conversion of uploaded data Glare introduces the concept
> of hooks for the operation. Thus the operator can extend the basic
> functionality of the system and add integration with third-party systems
> for each artifact type. For example, for Nokia we implemented integration
> with custom TOSCA validator.
> 

At first this set off my interop alarm, but it was a false alarm as long
as this is always limited to 3rd party systems. What worries me is when
somebody adds one of these for something already in OpenStack, and now
suddenly a perfectly interoperable app only works right on that one
special cloud.

> This is just a small overview of the key features of the service. For sure,
> at the moment Glare is able to do all that Glance can do (except maybe a
> sharing of artifacts), on the other hand we have added a number of new
> features, that were requested by cloud operators for a long time.
> 
> Fyi, now we in Nokia are preparing additional API, which corresponds to the
> ETSI VNF Packaging Specification [1]. So support of Image v2 API is not an
> impossible task, and we may implement it as an alternative way of
> interaction with "Images" artifact type. In this case Nova and other
> services using Glance are absolutely indifferent to what service provides
> Image API.
> 

If you can make it 100% API compatible with Image v2, you'll go a long way
to helping users smoothly switch over.

> All tasks related to documentation and packaging are solvable. We’re
> working on them together with Nokia, so I can assure you that the documents
> and packages will be available this spring. The same story is for Ansible
> and Puppet.
> 
> Now back again to our question. What I'd like is that Glare will receive
> due recognition. Doing a project on the outskirts of OpenStack is not I
> really want to. Therefore, it would be nice to develop Glare as a natural
> evolution of Glance, associated with the requirements of operators and the
> market in general. For Glance team is a good chance to try something new
> and interesting, and of course gain new experience.
> 

I support you in your attempt to have a natural evolution. I think
it's going to be harder and harder the longer you're developing Glare's
features without pushing for a transition to 

[openstack-dev] [Tacker] Tacker upgrade vnf

2017-02-13 Thread lương hữu tuấn
Hi Tacker folks,

I have a question about VNF upgrade. Please correct me if I am wrong.

As I understand it, Tacker supports VNF update when the VNFD and vnfd_id do not
change. What about the case of upgrading a VNF, for instance, where I would
like to upgrade my VNF, currently deployed as one virtual machine, to a new
version that uses a cluster of virtual machines, a new flavor, or a new
resource model in the Heat stack? Is this the process Tacker calls SCALE?

Br,

Nokia/Tuan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-sfc] What resources can and can't be reused

2017-02-13 Thread Duarte Cardoso, Igor
Hi networking-sfc,

As part of my work regarding SFC Encapsulation and SFC Graphs, I exercised the 
API to understand exactly what resources can be reused, to possibly relax a few 
of the constraints when a chain is encapsulated end-to-end.
I'm requesting that the leaders and cores take a look at the list below, and 
reply if you see something that doesn't look quite right (or have any other 
comment/question). Thanks!

1. Every flow-classifier must have a logical source port.
2. The flow-classifier must be unique in its (full) definition.
3. A port-chain can have multiple flow-classifiers associated with exactly the 
same definition BUT different logical source ports.
4. The port-chains can be ambiguous, i.e. match on the same classification 
criteria, if and only if there are 0 flow classifiers associated.
5. The flow classifiers can only be used once, by a single port-chain.
6. Different port-chains cannot be associated to different flow classifiers 
that specify the same classification criteria BUT different logical source 
ports (this is https://bugs.launchpad.net/networking-sfc/+bug/1638421).
7. A port-pair's ingress cannot be in use by another port-pair's ingress.
8. A port-pair's egress cannot be in use by another port-pair's egress.
9. A port-pair can be associated to another port-pair's ingress and egress 
ports BUT swapped (i1=e2, e1=i2).
10. The port-pairs become "in use" when a port-pair-group associates them, so 
they can't be reused across port-pair-groups.
11. A port-chain can include port-pair-groups already associated to other 
port-chains, as long as not the exact same sequence as another port-chain (e.g. 
pc1: [ppg1,ppg2]; pc2: [ppg1] - is fine).

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gnocchi]

2017-02-13 Thread Mahir Gunyel


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] TripleO Ocata release blockers

2017-02-13 Thread Ben Nemec



On 02/12/2017 02:24 PM, Emilien Macchi wrote:

Quick updates:

- All FFEs patches have been merged except:
https://review.openstack.org/#/c/330050/ - so we're green on this
side.
- Upgrade team is still working on automation to upgrade from newton to ocata.
- ovb-updates (CI job with IPv6) doesn't pass -
https://bugs.launchpad.net/tripleo/+bug/1663187 - See
https://review.openstack.org/#/c/432761/ for a potential fix.
- our CI is testing OpenStack from trunk, we have regular promotions.

At this stage, I believe we can release TripleO Ocata RC1 by Thursday
17th, considering Congress will be landed, 1663187 fixed, no new
blocker in CI.

I think upgrade folks will still have some patches after Thursday, but
I don't think it's a big deal, we'll just backport them to
stable/ocata.

Please let us know asap if you see more blockers for Ocata RC1.


I would like to propose https://bugs.launchpad.net/tripleo/+bug/1664331 
as a release blocker, mostly because if someone ran an undercloud 
upgrade without us fixing that I believe it could break their undercloud 
Heat in a way that would be difficult to recover from (should probably 
discuss that with the Heat team though to make sure I'm not wrong about 
the impact).


If we can't fix it before release, then I think we should at least merge 
the _stackrc_exists fix and essentially block undercloud upgrades until 
we fix the member role logic.  That way we can backport the fix knowing 
that no one has run a bad upgrade.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-13 Thread Monty Taylor
On 02/13/2017 07:47 AM, Ian Cordasco wrote:
> -Original Message-
> From: Clint Byrum 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: February 12, 2017 at 20:09:04
> To: openstack-dev 
> Subject:  Re: [openstack-dev] [tc][glance][glare][all]
> glance/glare/artifacts/images at the PTG
> 
>> Excerpts from Brian Rosmaita's message of 2017-02-10 12:39:11 -0500:
>>> I want to give all interested parties a heads up that I have scheduled a
>>> session in the Macon room from 9:30-10:30 a.m. on Thursday morning
>>> (February 23).
>>>
>>> Here's what we need to discuss. This is from my perspective as Glance
>>> PTL, so it's going to be Glance-centric. This is a quick narrative
>>> description; please go to the session etherpad [0] to turn this into a
>>> specific set of discussion items.
>>>
>>> Glance is the OpenStack image cataloging and delivery service. A few
>>> cycles ago (Juno?), someone noticed that maybe Glance could be
>>> generalized so that instead of storing image metadata and image data,
>>> Glance could store arbitrary digital "stuff" along with metadata
>>> describing the "stuff". Some people (like me) thought that this was an
>>> obvious direction for Glance to take, but others (maybe wiser, cooler
>>> heads) thought that Glance needed to focus on image cataloging and
>>> delivery and make sure it did a good job at that. Anyway, the Glance
>>> mission statement was changed to include artifacts, but the Glance
>>> community never embraced them 100%, and in Newton, Glare split off as
>>> its own project (which made sense to me, there was too much unclarity in
>>> Glance about how Glare fit in, and we were holding back development, and
>>> besides we needed to focus on images), and the Glance mission statement
>>> was re-amended specifically to exclude artifacts and focus on images and
>>> metadata definitions.
>>>
>>> OK, so the current situation is:
>>> - Glance "does" image cataloging and delivery and metadefs, and that's
>>> all it does.
>>> - Glare is an artifacts service (cataloging and delivery) that can also
>>> handle images.
>>>
>>> You can see that there's quite a bit of overlap. I gave you the history
>>> earlier because we did try to work as a single project, but it did not
>>> work out.
>>>
>>> So, now we are in 2017. The OpenStack development situation has been
>>> fragile since the second half of 2016, with several big OpenStack
>>> sponsors pulling way back on the amount of development resources being
>>> contributed to the community. This has left Glare in the position where
>>> it cannot qualify as a Big Tent project, even though there is interest
>>> in artifacts.
>>>
>>> Mike Fedosin, the PTL for Glare, has asked me about Glare becoming part
>>> of the Glance project again. I will be completely honest, I am inclined
>>> to say "no". I have enough problems just getting Glance stuff done (for
>>> example, image import missed Ocata). But in addition to doing what's
>>> right for Glance, I want to do what's right for OpenStack. And I look
>>> at the overlap and think ...
>>>
>>> Well, what I think is that I don't want to go through the Juno-Newton
>>> cycles of argument again. And we have to do what is right for our users.
>>>
>>> The point of this session is to discuss:
>>> - What does the Glance community see as the future of Glance?
>>> - What does the wider OpenStack community (TC) see as the future of Glance?
>>> - Maybe, more importantly, what does the wider community see as the
>>> obligations of Glance?
>>> - Does Glare fit into this vision?
>>> - What kind of community support is there for Glare?
>>>
>>> My reading of Glance history is that while some people were on board
>>> with artifacts as the future of Glance, there was not a sufficient
>>> critical mass of the Glance community that endorsed this direction and
>>> that's why things unravelled in Newton. I don't want to see that happen
>>> again. Further, I don't think the Glance community got the word out to
>>> the broader OpenStack community about the artifacts project, and we got
>>> a lot of pushback along the lines of "WTF? Glance needs to do images"
>>> variety. And probably rightly so -- Glance needs to do images. My
>>> point is that I don't want Glance to take Glare back unless it fits in
>>> with what the community sees as the appropriate direction for Glance.
>>> And I certainly don't want to take it back if the entire Glance
>>> community is not on board.
>>>
>>> Anyway, that's what we're going to discuss. I've booked one of the
>>> fishbowl rooms so we can get input from people beyond just the Glance
>>> and Glare projects.
>>>
>>
>> Does anybody else feel like this is deja vu of Neutron's inception?
>>
>> While I understand sometimes there are just incompatibilities in groups,
>> I think we should probably try again. Unfortunately, it sounds like
>> Glare already did the Neutron 

[openstack-dev] [ironic][ptg] Evening gathering and Attendance

2017-02-13 Thread Julia Kreger
Greetings everyone!

We are attempting to put together a team gathering for one evening during the 
PTG.  As such, we’ve created a doodle poll[0].  Please fill out the poll as 
soon as possible so we can determine the number of people who will be up for 
attending an evening gathering, as well as the best day/time to schedule a 
gathering for.

Also, on our planning etherpad, we have an attendance section[1] starting 
around line 82.  Please add yourself if you have not already done so.

-Julia

[0] http://doodle.com/poll/urmpt82pax77vmuz 

[1] https://etherpad.openstack.org/p/ironic-pike-ptg 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [monasca] Log Query API Design

2017-02-13 Thread Steve Simpson
Hi,

For those interested, I have been expanding on the initial design for
a simple query API to add to monasca-log-api. The design is available
here:

https://wiki.openstack.org/wiki/Monasca/Logging/Query_API_Design#Design:_Log_Listing

I have proposed the initial "Log Listing" part of the API here:

https://review.openstack.org/#/c/433016/

Would appreciate comments especially from those currently deploying
the monasca-log-api.

Cheers,
Steve Simpson
(stevejims Launchpad/IRC)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-13 Thread Mikhail Fedosin
Okay, it seems I did not express myself very well :) But then there is a
rhetorical question: if it's officially recommended to use the service with
the name that starts with "S", why does Nova use a service with the name
beginning with "G" to store its images?

As you may know, Glare is a proxy to Swift, Ceph and other possible cloud
storage backends, and it provides an abstraction (the artifact) on top of them,
plus several additional features (like custom data validation and conversion)
that Swift doesn't and shouldn't have. And for sure I was talking about a secure
and customizable catalog of binary data with its metadata, and not the concrete
storage implementation. Sorry again for this confusion :)

Best,
Mike

On Mon, Feb 13, 2017 at 8:04 PM, Ian Cordasco 
wrote:

> -Original Message-
> From: Jeremy Stanley 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: February 13, 2017 at 10:14:24
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject:  Re: [openstack-dev] [tc][glance][glare][all]
> glance/glare/artifacts/images at the PTG
>
> > On 2017-02-13 18:23:19 +0300 (+0300), Mikhail Fedosin wrote:
> > [...]
> > > Almost every project works with some binary data and must store it
> > > somewhere, and almost always storage itself is not the part of the
> > > project's mission. This issue has often been neglected. For this reason
> > > there is no single recommended method for storing of binary data, which
> > > would have a unified public api and hide all the things of the internal
> > > storage infrastructure.
> > [...]
> >
> > If you'll forgive the sarcasm, it sounds like you're proposing that
> > OpenStack components should be able to rely on the existence of a
> > standard service suitable for generalized storage and retrieval of
> > arbitrary blobs of data through an API. Our trademark
> > interoperability requirements may even guarantee the presence of one
> > already in any compliant deployment; I'll have to check... ;)
>
> Well it's a storage service, so I hope the name doesn't start with "S". ;D
>
> --
> Ian Cordasco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-13 Thread Ian Cordasco
-Original Message-
From: Jeremy Stanley 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: February 13, 2017 at 10:14:24
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [tc][glance][glare][all]
glance/glare/artifacts/images at the PTG

> On 2017-02-13 18:23:19 +0300 (+0300), Mikhail Fedosin wrote:
> [...]
> > Almost every project works with some binary data and must store it
> > somewhere, and almost always storage itself is not the part of the
> > project's mission. This issue has often been neglected. For this reason
> > there is no single recommended method for storing of binary data, which
> > would have a unified public api and hide all the things of the internal
> > storage infrastructure.
> [...]
>
> If you'll forgive the sarcasm, it sounds like you're proposing that
> OpenStack components should be able to rely on the existence of a
> standard service suitable for generalized storage and retrieval of
> arbitrary blobs of data through an API. Our trademark
> interoperability requirements may even guarantee the presence of one
> already in any compliant deployment; I'll have to check... ;)

Well it's a storage service, so I hope the name doesn't start with "S". ;D

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptg] [goals] Pike WSGI Goal Planning

2017-02-13 Thread Thierry Carrez
Emilien Macchi wrote:
> I created https://etherpad.openstack.org/p/ptg-pike-wsgi so we can
> start discussing on this goal.
> Thierry confirmed to me that we would have a room on either Monday or
> Tuesday. Please let us know in the etherpad if you have schedule
> constraints.

Sorry if I was unclear... you actually have the room available on both
days!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2017-02-13 Thread Doug Hellmann
Excerpts from Takashi Yamamoto's message of 2017-02-06 10:32:10 +0900:
> On Wed, Feb 1, 2017 at 9:46 AM, Takashi Yamamoto  
> wrote:
> > hi,
> >
> > On Fri, Jan 27, 2017 at 7:46 AM, Doug Hellmann  
> > wrote:
> >> Excerpts from Takashi Yamamoto's message of 2017-01-26 11:42:48 +0900:
> >>> hi,
> >>>
> >>> On Sat, Jan 14, 2017 at 2:17 AM, Doug Hellmann  
> >>> wrote:
> >>> > Excerpts from Dariusz Śmigiel's message of 2017-01-13 09:11:01 -0600:
> >>> >> 2017-01-12 21:43 GMT-06:00 Takashi Yamamoto :
> >>> >> > hi,
> >>> >> >
> >>> >> > On Wed, Nov 16, 2016 at 11:02 AM, Armando M.  
> >>> >> > wrote:
> >>> >> >> Hi
> >>> >> >>
> >>> >> >> As of today, the project neutron-vpnaas is no longer part of the 
> >>> >> >> neutron
> >>> >> >> governance. This was a decision reached after the project saw a 
> >>> >> >> dramatic
> >>> >> >> drop in active development over a prolonged period of time.
> >>> >> >>
> >>> >> >> What does this mean in practice?
> >>> >> >>
> >>> >> >> From a visibility point of view, release notes and documentation 
> >>> >> >> will no
> >>> >> >> longer appear on openstack.org as of Ocata going forward.
> >>> >> >> No more releases will be published by the neutron release team.
> >>> >> >> The neutron team will stop proposing fixes for the upstream CI, if 
> >>> >> >> not
> >>> >> >> solely on a voluntary basis (e.g. I still felt like proposing [2]).
> >>> >> >>
> >>> >> >> How does it affect you, the user or the deployer?
> >>> >> >>
> >>> >> >> You can continue to use vpnaas and its CLI via the 
> >>> >> >> python-neutronclient and
> >>> >> >> expect it to work with neutron up until the newton
> >>> >> >> release/python-neutronclient 6.0.0. After this point, if you want a 
> >>> >> >> release
> >>> >> >> that works for Ocata or newer, you need to proactively request a 
> >>> >> >> release
> >>> >> >> [5], and reach out to a member of the neutron release team [3] for 
> >>> >> >> approval.
> >>> >> >
> >>> >> > i want to make an ocata release. (and more importantly the stable 
> >>> >> > branch,
> >>> >> > for the benefit of consuming subprojects)
> >>> >> > for the purpose, the next step would be ocata-3, right?
> >>> >>
> >>> >> Hey Takashi,
> >>> >> If you want to release new version of neutron-vpnaas, please look at 
> >>> >> [1].
> >>> >> This is the place, which you need to update and based on provided
> >>> >> details, tags and branches will be cut.
> >>> >>
> >>> >> [1] 
> >>> >> https://github.com/openstack/releases/blob/master/deliverables/ocata/neutron-vpnaas.yaml
> >>> >
> >>> > Unfortunately, since vpnaas is no longer part of an official project,
> >>> > we won't be using the releases repository to manage and publish
> >>> > information about the releases. It'll need to be done by hand.
> >>>
> >>> who can/should do it by hand?
> >>
> >> I can do it. Let me know the version number, and for each repository the
> >> SHA of the commit on the master branch to be tagged.
> 
> please make it with the following.  thank you!
> 
> stable/ocata
> 10.0.0
> openstack/neutron-vpnaas
> d6db1238a4950df03dfb28acabcf4df14ebfa3ac

Sorry, I missed this email earlier.

Do you want 10.0.0 or 10.0.0.0rc1?

Doug

> 
> >
> > thank you. i'll ask you when necessary.
> >
> > i think it's fine to just make a branch from master when stable branch is 
> > cut
> > for neutron.  how others think?
> >
> >>
> >> Doug
> >>
> >>>
> >>> >
> >>> > Doug
> >>> >
> >>> >>
> >>> >> BR, Dariusz
> >>> >>
> >>> >
> >>> > __
> >>> > OpenStack Development Mailing List (not for usage questions)
> >>> > Unsubscribe: 
> >>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [arch] API WG PTG planning

2017-02-13 Thread Chris Dent

On Mon, 13 Feb 2017, Thierry Carrez wrote:


So we have indeed an extra room that we could dedicate to the API WG
(Monday and/or Tuesday) -- the only drawback is that it won't appear in
printed maps or schedule since those were sent to print already.


ttx and I talked about this in IRC and came up with what seemed like
a good plan:

* have a room for API-WG on Monday
* continue sharing space with the arch-wg in the Tuesday room they
  already have

There are several topics that are of interest to both groups (as
shown on the arch-wg etherpad [1]) so it will be good to be able to
maintain the overlap.

This should allow us to have two days of API and architecture
related discussions with as broad a group as possible. Let's
continue to use the one etherpad [1] to organize topics. The more we
are able to formulate a bit of an agenda beforehand, the more we can
be sure that the people who want to show up are able to do so.

[1] https://etherpad.openstack.org/p/ptg-architecture-workgroup

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API WG PTG planning

2017-02-13 Thread Thierry Carrez
Thierry Carrez wrote:
> Ed Leafe wrote:
>> On Feb 10, 2017, at 2:48 PM, Matt Riedemann  wrote:
>>
>>> I assumed we'd take the opportunity to talk about capabilities [1] at the 
>>> PTG but couldn't find any etherpad for the API WG on the wiki [2].
>>>
>>> Is the API WG getting together on Monday or Tuesday?
>>>
>>> [1] https://review.openstack.org/#/c/386555/
>>> [2] https://wiki.openstack.org/wiki/PTG/Pike/Etherpads
>>
>> We weren’t listed on the etherpad listing, so we didn’t know if we could 
>> take a slot. So we asked the Architecture WG if we could share space with 
>> them. The capabilities discussion is one of the ones we are planning on:
>>
>> https://etherpad.openstack.org/p/ptg-architecture-workgroup
> 
> Yes, that was a bit of an oversight, difficult to fix one week before.
> 
> Just in case, I'll check today if we have a spare room on one of those
> days, but if not it's probably safe to leverage the Arch WG room (on
> Tuesday) or one of the reservable discussion rooms.

So we have indeed an extra room that we could dedicate to the API WG
(Monday and/or Tuesday) -- the only drawback is that it won't appear in
printed maps or schedule since those were sent to print already.

Let me know if you want it (and the days you want it) and I'll make sure
it's set aside for you.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Pollsters for Veritas HyperScale

2017-02-13 Thread gordon chung


On 10/02/17 12:24 AM, Nirendra Awasthi wrote:

> Overview:
> * HyperScale pollsters are required to collect and monitor HyperScale
> storage statistics.
> * Collected statistics can be visualized in HyperScale dashboard.
> * Pollsters are deployed on all the managed computes, HyperScale data
> plane and HyperScale control plane.
> * Pollsters are driven by a separate HyperScale service
>
> Pollsters:
> HyperScale pollsters are divided into following categories:
> - Compute plane pollsters
> - Data plane pollsters
> - Tenant pollsters
> - Cloud pollsters

i assume this is a vendor specific solution. we don't/prefer not to 
store drivers in Ceilometer. you can take a look at how powervm does 
this[1]. they basically extend the existing interfaces we provide[2]. i 
suspect you want to do something similar. if there's an interface 
missing, feel free to propose it.

for everything else, we detect pollsters to load via entry_points[3][4]. 
i imagine you want to leverage that and add your own pollster in same 
namespace for Ceilometer to pickup.

lastly, the polling control is handled by polling.yaml and not the 
pipeline.yaml[5]

you're welcome to post your solution to the ceilometer repo. just know, all 
the driver specific code will probably be pushed outside the repo.


[1] https://github.com/openstack/ceilometer-powervm
[2] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/compute/virt/inspector.py
[3] https://github.com/openstack/ceilometer/blob/master/setup.cfg#L80
[4] https://github.com/openstack/ceilometer/blob/master/setup.cfg#L143
[5] 
https://github.com/openstack/ceilometer/blob/master/etc/ceilometer/polling.yaml

cheers,
-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-13 Thread Jeremy Stanley
On 2017-02-13 18:23:19 +0300 (+0300), Mikhail Fedosin wrote:
[...]
> Almost every project works with some binary data and must store it
> somewhere, and almost always storage itself is not the part of the
> project's mission. This issue has often been neglected. For this reason
> there is no single recommended method for storing of binary data, which
> would have a unified public api and hide all the things of the internal
> storage infrastructure.
[...]

If you'll forgive the sarcasm, it sounds like you're proposing that
OpenStack components should be able to rely on the existence of a
standard service suitable for generalized storage and retrieval of
arbitrary blobs of data through an API. Our trademark
interoperability requirements may even guarantee the presence of one
already in any compliant deployment; I'll have to check... ;)
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Blueprints for DPDK in OvS2.6

2017-02-13 Thread Emilien Macchi
On Wed, Feb 8, 2017 at 1:59 AM, Saravanan KR  wrote:
> Hello,
>
> We have raised 2 BP for OvS2.6 integration with DPDK support.
>
> Basic Migration -
> https://blueprints.launchpad.net/tripleo/+spec/ovs-2.6-dpdk (Targeted
> for March)
> OvS 2.6 Features -
> https://blueprints.launchpad.net/tripleo/+spec/ovs-2.6-features-dpdk
> (Targeted for Pike)

Both links are 404; any idea of what happened?

Other than that, I don't see any blocker to have these blueprints in Pike cycle.

Thanks!

> We find the changes to be straightforward and minor. The required
> changes have been updated in the BP description. Please let us know if
> it requires a spec.
>
> Regards,
> Saravanan KR
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Reminder meeting on Feb 14th @ 1500 UTC

2017-02-13 Thread Alex Schultz
Just a reminder that we will be having our meeting tomorrow, Feb 14th, at
1500 UTC in #openstack-meeting-4, unless the agenda[0] remains empty.
So if you have something you wish to talk about, please add it to
the agenda.  As always, the previous meeting logs and agendas are
reviewable here[1].  We will most likely not have a meeting next week
due to the PTG.


Thanks,
-Alex

[0] https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20170214
[1] http://docs.openstack.org/developer/puppet-openstack-guide/meetings.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance][tc][rally] Rally (Re: [rally][release] How rally release version)

2017-02-13 Thread Davanum Srinivas
Andrey,

My apologies. Thanks for the clarifications.

-- Dims

On Mon, Feb 13, 2017 at 10:27 AM, Andrey Kurilin  wrote:
>
>
> On Mon, Feb 13, 2017 at 4:49 PM, Davanum Srinivas  wrote:
>>
>> Andrey,
>>
>> It's a question, not a proposal.
>
>
> Sorry, but your message doesn't sound like a question. It sounds like a
> polite proposal.
>
>>
>> If you don't do use releases repo, then rally information will be stale
>> here:
>> https://releases.openstack.org/
>>
>> Here's why we have a release team and things we do:
>>
>> https://governance.openstack.org/tc/reference/projects/release-management.html
>>
>> https://specs.openstack.org/openstack-infra/infra-specs/specs/centralize-release-tagging.html
>> http://docs.openstack.org/project-team-guide/release-management.html
>>
>> Since Rally is following the independent model, it has a whole lot of
>> leeway:
>>
>> http://docs.openstack.org/project-team-guide/release-management.html#independent-release-model
>>
>> However the absolute minimum requirement is that the releases repo
>> should be kept in sync. If that is too cumbersome, i don't know...
>>
>
> Please re-read your first mail in the thread. It is not about synchronizing
> release notes at all. You are
> asking about throwing the Rally project out of the Big Tent due to outdated info
> posted at https://releases.openstack.org.
> Seriously?
>
> To finish this topic:
>
> - As a Rally PTL, I do not want to drop the project out of Big Tent. At
> least for now. Who knows what will happen
>   when you will extend bureaucracy someday...
>
> - I'll update info at openstack/releases repo soon and will try to do it in
> the right time while releasing new versions.
>
> - Rally follows the rules described by:
>   *
> http://docs.openstack.org/project-team-guide/release-management.html#independent-release-model
>   *
> http://docs.openstack.org/project-team-guide/release-management.html#release-process-for-other-projects
>
> > OpenStack projects following a cycle-independent model can ... push
> signed tags by themselves.
>
> - There are no reasons to continue discussion dropping Rally. I can start a
> ton of topics like "Drop Nova" with similar "real"
>   reasons.
>
> - I'm not against the work done by the OpenStack Release team. I had a good
> experience with it while working on non-Rally projects.
>
>>
>> Thanks,
>> Dims
>>
>>
>>
>> On Mon, Feb 13, 2017 at 9:15 AM, Andrey Kurilin 
>> wrote:
>> > Hi Dims!
>> >
>> > I do not want to say except question "why you are proposing to do
>> > that?".
>> >
>> > I tried to find any rules about releasing at
>> > https://governance.openstack.org, but I could not.
>> > Please point me what rule I broke.
>> >
>> > On Mon, Feb 13, 2017 at 3:57 PM, Davanum Srinivas 
>> > wrote:
>> >>
>> >> Andrey, Rally team,
>> >>
>> >> Do you want Rally to be dropped from Governance and Releases
>> >> repository?
>> >>
>> >> Thanks,
>> >> Dims
>> >>
>> >> On Mon, Feb 13, 2017 at 8:17 AM, Jeffrey Zhang
>> >> 
>> >> wrote:
>> >> > @Andrey, thanks for the explanation.
>> >> >
>> >> > The issue is: how can i know which tag works with certain OpenStack
>> >> > branch?
>> >> > For example, which tag
>> >> > works with OpenStack Ocata release? There is no place recording this.
>> >> >
>> >> > I also found there are some other project do not use
>> >> > "project/releases"
>> >> > project. May they are using
>> >> > the same solution as rally.
>> >> >
>> >> > On Mon, Feb 13, 2017 at 6:14 PM, Andrey Kurilin
>> >> > 
>> >> > wrote:
>> >> >>
>> >> >> Hi Jeffrey,
>> >> >>
>> >> >> Rally team do not use "releases" repo at all, that is why
>> >> >> information
>> >> >> at
>> >> >> [2] is outdated.
>> >> >> Our release workflow is making a proper tag and pushing it via
>> >> >> Gerrit.
>> >> >> I
>> >> >> find it more convenient.
>> >> >>
>> >> >>
>> >> >>
>> >> >> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang
>> >> >> 
>> >> >> wrote:
>> >> >>>
>> >> >>> Hey guys,
>> >> >>>
>> >> >>> I found rally already releases its 0.8.1 tag from[0][1]. But
>> >> >>> I found nothing in openstack/releases project[2]. How rally
>> >> >>> create tag?
>> >> >>>
>> >> >>> [0] http://tarballs.openstack.org/rally/
>> >> >>> [1] https://github.com/openstack/rally/releases
>> >> >>> [2]
>> >> >>>
>> >> >>>
>> >> >>> https://github.com/openstack/releases/blob/master/deliverables/_independent/rally.yaml#L45
>> >> >>>
>> >> >>> --
>> >> >>> Regards,
>> >> >>> Jeffrey Zhang
>> >> >>> Blog: http://xcodest.me
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>> __
>> >> >>> OpenStack Development Mailing List (not for usage questions)
>> >> >>> Unsubscribe:
>> >> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >>>
>> >> >>
>> 

[openstack-dev] [kolla] Debian support?

2017-02-13 Thread Marcin Juszkiewicz
As part of my Linaro work I am working on making Debian in Kolla great again.
I have posted a set of patches for review [1] which make it work properly.

1. https://review.openstack.org/#/q/owner:%22Marcin+Juszkiewicz

The images are largely in sync with the Ubuntu ones. There are some exceptions
where builds are done against the 'jessie-backports' tag due to some packages
not being present in Debian/stable.

So far I built 158 containers for kolla/debian:jessie-backports combo.

The question is: are there people interested in using Kolla to build
container images for Debian/jessie-backports? I plan to move to
Debian/stretch during the next few weeks rather than use the current stable
release.
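
If anyone wants to give it a try, here is a rough sketch of how such a build
could be invoked once the patches above are applied (the flag values are just
examples, pick whatever base tag and install type you need):

kolla-build --base debian --base-tag jessie-backports --type source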

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][ironic][release] How rally/ironic release version

2017-02-13 Thread Dmitry Tantsur
This is right, we release with the help of the release team and this
repository. The files do not exist because we haven't had any releases in Ocata
for these projects yet (we're not milestone based).


On 02/13/2017 04:27 PM, Jeffrey Zhang wrote:

@Dmitry, thanks.

So ironic will use "openstack/releases" for release management? right?

On Mon, Feb 13, 2017 at 11:19 PM, Dmitry Tantsur > wrote:

We're still not done :( But we're very close! Please expect releases later
today or tomorrow.

On 02/13/2017 04:09 PM, Jeffrey Zhang wrote:

loop ironic tag in subject.

i do not see any release info in Ocata cycle in ironic[0] and
ironic-inspector[1] project.


[0]

https://github.com/openstack/releases/blob/master/deliverables/ocata/ironic.yaml


[1]

https://github.com/openstack/releases/blob/master/deliverables/ocata/ironic-inspector.yaml



On Mon, Feb 13, 2017 at 11:02 PM, Davanum Srinivas 
>> wrote:

To add, please consider appointing a CPL to coordinate the update to
the releases/ repo:

https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management



>

Thanks,
Dims

On Mon, Feb 13, 2017 at 9:15 AM, Doug Hellmann

>> 
wrote:
> Excerpts from Andrey Kurilin's message of 2017-02-13 12:14:31 
+0200:
>> Hi Jeffrey,
>>
>> Rally team do not use "releases" repo at all, that is why
information at
>> [2] is outdated.
>> Our release workflow is making a proper tag and pushing it via
Gerrit. I
>> find it more convenient.
>
> Please propose an update to openstack/releases to either update 
the
> history information for Rally's releases or to delete the relevant
file
> entirely to avoid confusion.
>
> Doug
>
>>
>>
>>
>> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang

>>
>> wrote:
>>
>> > Hey guys,
>> >
>> > I found rally already releases its 0.8.1 tag from[0][1]. But
>> > I found nothing in openstack/releases project[2]. How rally
>> > create tag?
>> >
>> > [0] http://tarballs.openstack.org/rally/

>
>> > [1] https://github.com/openstack/rally/releases

>
>> > [2]
https://github.com/openstack/releases/blob/master/deliverables/_

>
>> > independent/rally.yaml#L45
>> >
>> > --
>> > Regards,
>> > Jeffrey Zhang
>> > Blog: http://xcodest.me
>> >
>> >

__
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe


>
>> >
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [governance][tc][rally] Rally (Re: [rally][release] How rally release version)

2017-02-13 Thread Andrey Kurilin
On Mon, Feb 13, 2017 at 4:49 PM, Davanum Srinivas  wrote:

> Andrey,
>
> It's a question, not a proposal.
>

Sorry, but your message doesn't sound like a question. It sounds like a
polite proposal.


> If you don't do use releases repo, then rally information will be stale
> here:
> https://releases.openstack.org/
>
> Here's why we have a release team and things we do:
> https://governance.openstack.org/tc/reference/projects/relea
> se-management.html
> https://specs.openstack.org/openstack-infra/infra-specs/spec
> s/centralize-release-tagging.html
> http://docs.openstack.org/project-team-guide/release-management.html
>
> Since Rally is following the independent model, it has a whole lot of
> leeway:
> http://docs.openstack.org/project-team-guide/release-managem
> ent.html#independent-release-model
>
> However the absolute minimum requirement is that the releases repo
> should be kept in sync. If that is too cumbersome, i don't know...
>
>
Please re-read your first mail in the thread. It is not about synchronizing
release notes at all. You are
asking about throwing the Rally project out of the Big Tent due to outdated info
posted at https://releases.openstack.org.
Seriously?

To finish this topic:

- As a Rally PTL, I do not want to drop the project out of Big Tent. At
least for now. Who knows what will happen
  when you will extend bureaucracy someday...

- I'll update info at openstack/releases repo soon and will try to do it in
the right time while releasing new versions.

- Rally follows the rules described by:
  * http://docs.openstack.org/project-team-guide/release-managem
ent.html#independent-release-model
  *
http://docs.openstack.org/project-team-guide/release-management.html#release-process-for-other-projects

> OpenStack projects following a cycle-independent model can ... push
signed tags by themselves.

- There are no reasons to continue discussion dropping Rally. I can start a
ton of topics like "Drop Nova" with similar "real"
  reasons.

- I'm not against the work done by the OpenStack Release team. I had a good
experience with it while working on non-Rally projects.
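
For anyone unfamiliar with the cycle-independent workflow referenced above,
here is a rough sketch of what "making a proper tag and pushing it via Gerrit"
looks like (the version number and the "gerrit" remote name are just examples,
and it assumes you have tag-pushing rights for the project):

git tag -s 0.8.1 -m "Rally release 0.8.1"
git push gerrit 0.8.1

To keep releases.openstack.org accurate, the matching deliverable file in
openstack/releases should then be updated as well.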


> Thanks,
> Dims
>
>
>
> On Mon, Feb 13, 2017 at 9:15 AM, Andrey Kurilin 
> wrote:
> > Hi Dims!
> >
> > I do not want to say except question "why you are proposing to do that?".
> >
> > I tried to find any rules about releasing at
> > https://governance.openstack.org, but I could not.
> > Please point me what rule I broke.
> >
> > On Mon, Feb 13, 2017 at 3:57 PM, Davanum Srinivas 
> wrote:
> >>
> >> Andrey, Rally team,
> >>
> >> Do you want Rally to be dropped from Governance and Releases repository?
> >>
> >> Thanks,
> >> Dims
> >>
> >> On Mon, Feb 13, 2017 at 8:17 AM, Jeffrey Zhang  >
> >> wrote:
> >> > @Andrey, thanks for the explanation.
> >> >
> >> > The issue is: how can i know which tag works with certain OpenStack
> >> > branch?
> >> > For example, which tag
> >> > works with OpenStack Ocata release? There is no place recording this.
> >> >
> >> > I also found there are some other project do not use
> "project/releases"
> >> > project. May they are using
> >> > the same solution as rally.
> >> >
> >> > On Mon, Feb 13, 2017 at 6:14 PM, Andrey Kurilin <
> akuri...@mirantis.com>
> >> > wrote:
> >> >>
> >> >> Hi Jeffrey,
> >> >>
> >> >> Rally team do not use "releases" repo at all, that is why information
> >> >> at
> >> >> [2] is outdated.
> >> >> Our release workflow is making a proper tag and pushing it via
> Gerrit.
> >> >> I
> >> >> find it more convenient.
> >> >>
> >> >>
> >> >>
> >> >> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang
> >> >> 
> >> >> wrote:
> >> >>>
> >> >>> Hey guys,
> >> >>>
> >> >>> I found rally already releases its 0.8.1 tag from[0][1]. But
> >> >>> I found nothing in openstack/releases project[2]. How rally
> >> >>> create tag?
> >> >>>
> >> >>> [0] http://tarballs.openstack.org/rally/
> >> >>> [1] https://github.com/openstack/rally/releases
> >> >>> [2]
> >> >>>
> >> >>> https://github.com/openstack/releases/blob/master/deliverabl
> es/_independent/rally.yaml#L45
> >> >>>
> >> >>> --
> >> >>> Regards,
> >> >>> Jeffrey Zhang
> >> >>> Blog: http://xcodest.me
> >> >>>
> >> >>>
> >> >>>
> >> >>> 
> __
> >> >>> OpenStack Development Mailing List (not for usage questions)
> >> >>> Unsubscribe:
> >> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >>>
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Best regards,
> >> >> Andrey Kurilin.
> >> >>
> >> >>
> >> >> 
> __
> >> >> OpenStack Development Mailing List (not for usage questions)
> >> >> Unsubscribe:
> >> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> >> 

Re: [openstack-dev] [rally][ironic][release] How rally/ironic release version

2017-02-13 Thread Jeffrey Zhang
@Dmitry, thanks.

So ironic will use "openstack/releases" for release management? right?

On Mon, Feb 13, 2017 at 11:19 PM, Dmitry Tantsur 
wrote:

> We're still not done :( But we're very close! Please expect releases later
> today or tomorrow.
>
> On 02/13/2017 04:09 PM, Jeffrey Zhang wrote:
>
>> loop ironic tag in subject.
>>
>> i do not see any release info in Ocata cycle in ironic[0] and
>> ironic-inspector[1] project.
>>
>>
>> [0] https://github.com/openstack/releases/blob/master/deliverabl
>> es/ocata/ironic.yaml
>> [1] https://github.com/openstack/releases/blob/master/deliverabl
>> es/ocata/ironic-inspector.yaml
>>
>> On Mon, Feb 13, 2017 at 11:02 PM, Davanum Srinivas > > wrote:
>>
>> To add, please consider appointing a CPL to coordinate the update to
>> the releases/ repo:
>> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release
>> _management
>> > e_management>
>>
>> Thanks,
>> Dims
>>
>> On Mon, Feb 13, 2017 at 9:15 AM, Doug Hellmann > > wrote:
>> > Excerpts from Andrey Kurilin's message of 2017-02-13 12:14:31 +0200:
>> >> Hi Jeffrey,
>> >>
>> >> Rally team do not use "releases" repo at all, that is why
>> information at
>> >> [2] is outdated.
>> >> Our release workflow is making a proper tag and pushing it via
>> Gerrit. I
>> >> find it more convenient.
>> >
>> > Please propose an update to openstack/releases to either update the
>> > history information for Rally's releases or to delete the relevant
>> file
>> > entirely to avoid confusion.
>> >
>> > Doug
>> >
>> >>
>> >>
>> >>
>> >> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang <
>> zhang.lei@gmail.com
>> >
>> >> wrote:
>> >>
>> >> > Hey guys,
>> >> >
>> >> > I found rally already releases its 0.8.1 tag from[0][1]. But
>> >> > I found nothing in openstack/releases project[2]. How rally
>> >> > create tag?
>> >> >
>> >> > [0] http://tarballs.openstack.org/rally/
>> 
>> >> > [1] https://github.com/openstack/rally/releases
>> 
>> >> > [2] https://github.com/openstack/releases/blob/master/deliverabl
>> es/_
>> 
>> >> > independent/rally.yaml#L45
>> >> >
>> >> > --
>> >> > Regards,
>> >> > Jeffrey Zhang
>> >> > Blog: http://xcodest.me
>> >> >
>> >> > 
>> __
>> >> > OpenStack Development Mailing List (not for usage questions)
>> >> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > >
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac
>> k-dev
>> 
>> >> >
>> >> >
>> >>
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.op
>> enstack.org?subject:unsubscribe
>> > >
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.op
>> enstack.org?subject:unsubscribe
>> > >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>>
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me 
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: 

Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-13 Thread Mikhail Fedosin
Hello!


Let me quickly describe my vision of the problem. I asked this question to
Brian last Friday, because it is evident that the projects have overlapping
functionality. For this reason, I proposed to bring Glare back and develop it
as a new generation of the Glance service. Perhaps such a solution would be
more correct, from my point of view.

Moving away from Glance, let me remind you why we created Glare service.

Almost every project works with some binary data and must store it
somewhere, and almost always storage itself is not the part of the
project's mission. This issue has often been neglected. For this reason
there is no single recommended method for storing of binary data, which
would have a unified public api and hide all the things of the internal
storage infrastructure.

These questions were answered by Glare. First of all, the service allows
different storage backends to be used for various types of artifacts - an
operator can assign the storage of large files, such as virtual machine images,
to Swift, and for relatively small ones, such as Heat templates, use a MySQL
database.

Then, we have to admit that data tends to change, so we added versioning of
artifacts and dependencies between them, so that it is convenient for the user
to fetch the data of the required version.

Often "binary data" refers not to one specific object but to a whole set of
files. Therefore, we have implemented the ability to create arbitrarily nested
folders per artifact and store multiple files there. And of course users can
retrieve any file with a single API request.

For validation and conversion of uploaded data Glare introduces the concept
of hooks on operations. Thus the operator can extend the basic
functionality of the system and add integration with third-party systems
for each artifact type. For example, for Nokia we implemented integration
with a custom TOSCA validator.

This is just a small overview of the key features of the service. For sure,
at the moment Glare is able to do all that Glance can do (except maybe the
sharing of artifacts); on the other hand we have added a number of new
features that have been requested by cloud operators for a long time.

FYI, we at Nokia are now preparing an additional API, which corresponds to the
ETSI VNF Packaging Specification [1]. So support for the Images v2 API is not an
impossible task, and we may implement it as an alternative way of interacting
with the "Images" artifact type. In that case Nova and other services using
Glance would be absolutely indifferent to which service provides the Image API.

All tasks related to documentation and packaging are solvable. We’re
working on them together with Nokia, so I can assure you that the documents
and packages will be available this spring. The same goes for Ansible
and Puppet.

Now back again to our question. What I'd like is that Glare will receive
due recognition. Doing a project on the outskirts of OpenStack is not what I
really want to do. Therefore, it would be nice to develop Glare as a natural
evolution of Glance, associated with the requirements of operators and the
market in general. For the Glance team it is a good chance to try something new
and interesting, and of course gain new experience.

I am ready to discuss all these questions in this thread, and at PTG, as
long as necessary.

Best,

Mike

[1]
http://www.etsi.org/deliver/etsi_gs/NFV-IFA/001_099/011/02.01.01_60/gs_NFV-IFA011v020101p.pdf

On Fri, Feb 10, 2017 at 8:39 PM, Brian Rosmaita 
wrote:

> I want to give all interested parties a heads up that I have scheduled a
> session in the Macon room from 9:30-10:30 a.m. on Thursday morning
> (February 23).
>
> Here's what we need to discuss.  This is from my perspective as Glance
> PTL, so it's going to be Glance-centric.  This is a quick narrative
> description; please go to the session etherpad [0] to turn this into a
> specific set of discussion items.
>
> Glance is the OpenStack image cataloging and delivery service.  A few
> cycles ago (Juno?), someone noticed that maybe Glance could be
> generalized so that instead of storing image metadata and image data,
> Glance could store arbitrary digital "stuff" along with metadata
> describing the "stuff".  Some people (like me) thought that this was an
> obvious direction for Glance to take, but others (maybe wiser, cooler
> heads) thought that Glance needed to focus on image cataloging and
> delivery and make sure it did a good job at that.  Anyway, the Glance
> mission statement was changed to include artifacts, but the Glance
> community never embraced them 100%, and in Newton, Glare split off as
> its own project (which made sense to me, there was too much unclarity in
> Glance about how Glare fit in, and we were holding back development, and
> besides we needed to focus on images), and the Glance mission statement
> was re-amended specifically to exclude artifacts and focus on images and
> metadata definitions.
>
> OK, so the current situation is:
> - Glance "does" 

Re: [openstack-dev] [rally][ironic][release] How rally/ironic release version

2017-02-13 Thread Dmitry Tantsur
We're still not done :( But we're very close! Please expect releases later today 
or tomorrow.


On 02/13/2017 04:09 PM, Jeffrey Zhang wrote:

loop ironic tag in subject.

i do not see any release info in Ocata cycle in ironic[0] and
ironic-inspector[1] project.


[0] 
https://github.com/openstack/releases/blob/master/deliverables/ocata/ironic.yaml
[1] 
https://github.com/openstack/releases/blob/master/deliverables/ocata/ironic-inspector.yaml

On Mon, Feb 13, 2017 at 11:02 PM, Davanum Srinivas > wrote:

To add, please consider appointing a CPL to coordinate the update to
the releases/ repo:
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management


Thanks,
Dims

On Mon, Feb 13, 2017 at 9:15 AM, Doug Hellmann > wrote:
> Excerpts from Andrey Kurilin's message of 2017-02-13 12:14:31 +0200:
>> Hi Jeffrey,
>>
>> Rally team do not use "releases" repo at all, that is why information at
>> [2] is outdated.
>> Our release workflow is making a proper tag and pushing it via Gerrit. I
>> find it more convenient.
>
> Please propose an update to openstack/releases to either update the
> history information for Rally's releases or to delete the relevant file
> entirely to avoid confusion.
>
> Doug
>
>>
>>
>>
>> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang >
>> wrote:
>>
>> > Hey guys,
>> >
>> > I found rally already releases its 0.8.1 tag from[0][1]. But
>> > I found nothing in openstack/releases project[2]. How rally
>> > create tag?
>> >
>> > [0] http://tarballs.openstack.org/rally/

>> > [1] https://github.com/openstack/rally/releases

>> > [2] https://github.com/openstack/releases/blob/master/deliverables/_

>> > independent/rally.yaml#L45
>> >
>> > --
>> > Regards,
>> > Jeffrey Zhang
>> > Blog: http://xcodest.me
>> >
>> > 
__
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>> >
>> >
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][ironic][release] How rally/ironic release version

2017-02-13 Thread Jeffrey Zhang
loop ironic tag in subject.

i do not see any release info in Ocata cycle in ironic[0] and
ironic-inspector[1] project.


[0]
https://github.com/openstack/releases/blob/master/deliverables/ocata/ironic.yaml
[1]
https://github.com/openstack/releases/blob/master/deliverables/ocata/ironic-inspector.yaml

On Mon, Feb 13, 2017 at 11:02 PM, Davanum Srinivas 
wrote:

> To add, please consider appointing a CPL to coordinate the update to
> the releases/ repo:
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management
>
> Thanks,
> Dims
>
> On Mon, Feb 13, 2017 at 9:15 AM, Doug Hellmann 
> wrote:
> > Excerpts from Andrey Kurilin's message of 2017-02-13 12:14:31 +0200:
> >> Hi Jeffrey,
> >>
> >> Rally team do not use "releases" repo at all, that is why information at
> >> [2] is outdated.
> >> Our release workflow is making a proper tag and pushing it via Gerrit. I
> >> find it more convenient.
> >
> > Please propose an update to openstack/releases to either update the
> > history information for Rally's releases or to delete the relevant file
> > entirely to avoid confusion.
> >
> > Doug
> >
> >>
> >>
> >>
> >> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang  >
> >> wrote:
> >>
> >> > Hey guys,
> >> >
> >> > I found rally already releases its 0.8.1 tag from[0][1]. But
> >> > I found nothing in openstack/releases project[2]. How rally
> >> > create tag?
> >> >
> >> > [0] http://tarballs.openstack.org/rally/
> >> > [1] https://github.com/openstack/rally/releases
> >> > [2] https://github.com/openstack/releases/blob/master/deliverables/_
> >> > independent/rally.yaml#L45
> >> >
> >> > --
> >> > Regards,
> >> > Jeffrey Zhang
> >> > Blog: http://xcodest.me
> >> >
> >> > 
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >> >
> >>
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][release] How rally release version

2017-02-13 Thread Davanum Srinivas
To add, please consider appointing a CPL to coordinate the update to
the releases/ repo:
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management

Thanks,
Dims

On Mon, Feb 13, 2017 at 9:15 AM, Doug Hellmann  wrote:
> Excerpts from Andrey Kurilin's message of 2017-02-13 12:14:31 +0200:
>> Hi Jeffrey,
>>
>> Rally team do not use "releases" repo at all, that is why information at
>> [2] is outdated.
>> Our release workflow is making a proper tag and pushing it via Gerrit. I
>> find it more convenient.
>
> Please propose an update to openstack/releases to either update the
> history information for Rally's releases or to delete the relevant file
> entirely to avoid confusion.
>
> Doug
>
>>
>>
>>
>> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang 
>> wrote:
>>
>> > Hey guys,
>> >
>> > I found rally already releases its 0.8.1 tag from[0][1]. But
>> > I found nothing in openstack/releases project[2]. How rally
>> > create tag?
>> >
>> > [0] http://tarballs.openstack.org/rally/
>> > [1] https://github.com/openstack/rally/releases
>> > [2] https://github.com/openstack/releases/blob/master/deliverables/_
>> > independent/rally.yaml#L45
>> >
>> > --
>> > Regards,
>> > Jeffrey Zhang
>> > Blog: http://xcodest.me
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [ptg] [goals] Pike WSGI Goal Planning

2017-02-13 Thread Emilien Macchi
I created https://etherpad.openstack.org/p/ptg-pike-wsgi so we can
start discussing on this goal.
Thierry confirmed to me that we would have a room on either Monday or
Tuesday. Please let us know in the etherpad if you have schedule
constraints.

Also, if you're PTL or / and interested by this Goal, please start
looking at the etherpad and maybe update the status for your project.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance][tc][rally] Rally (Re: [rally][release] How rally release version)

2017-02-13 Thread Davanum Srinivas
Andrey,

It's a question, not a proposal.

If you don't do use releases repo, then rally information will be stale here:
https://releases.openstack.org/

Here's why we have a release team and things we do:
https://governance.openstack.org/tc/reference/projects/release-management.html
https://specs.openstack.org/openstack-infra/infra-specs/specs/centralize-release-tagging.html
http://docs.openstack.org/project-team-guide/release-management.html

Since Rally is following the independent model, it has a whole lot of leeway:
http://docs.openstack.org/project-team-guide/release-management.html#independent-release-model

However, the absolute minimum requirement is that the releases repo
should be kept in sync. If that is too cumbersome, I don't know...

Thanks,
Dims



On Mon, Feb 13, 2017 at 9:15 AM, Andrey Kurilin  wrote:
> Hi Dims!
>
> I do not want to say except question "why you are proposing to do that?".
>
> I tried to find any rules about releasing at
> https://governance.openstack.org, but I could not.
> Please point me what rule I broke.
>
> On Mon, Feb 13, 2017 at 3:57 PM, Davanum Srinivas  wrote:
>>
>> Andrey, Rally team,
>>
>> Do you want Rally to be dropped from Governance and Releases repository?
>>
>> Thanks,
>> Dims
>>
>> On Mon, Feb 13, 2017 at 8:17 AM, Jeffrey Zhang 
>> wrote:
>> > @Andrey, thanks for the explanation.
>> >
>> > The issue is: how can i know which tag works with certain OpenStack
>> > branch?
>> > For example, which tag
>> > works with OpenStack Ocata release? There is no place recording this.
>> >
>> > I also found there are some other project do not use "project/releases"
>> > project. May they are using
>> > the same solution as rally.
>> >
>> > On Mon, Feb 13, 2017 at 6:14 PM, Andrey Kurilin 
>> > wrote:
>> >>
>> >> Hi Jeffrey,
>> >>
>> >> Rally team do not use "releases" repo at all, that is why information
>> >> at
>> >> [2] is outdated.
>> >> Our release workflow is making a proper tag and pushing it via Gerrit.
>> >> I
>> >> find it more convenient.
>> >>
>> >>
>> >>
>> >> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang
>> >> 
>> >> wrote:
>> >>>
>> >>> Hey guys,
>> >>>
>> >>> I found rally already releases its 0.8.1 tag from[0][1]. But
>> >>> I found nothing in openstack/releases project[2]. How rally
>> >>> create tag?
>> >>>
>> >>> [0] http://tarballs.openstack.org/rally/
>> >>> [1] https://github.com/openstack/rally/releases
>> >>> [2]
>> >>>
>> >>> https://github.com/openstack/releases/blob/master/deliverables/_independent/rally.yaml#L45
>> >>>
>> >>> --
>> >>> Regards,
>> >>> Jeffrey Zhang
>> >>> Blog: http://xcodest.me
>> >>>
>> >>>
>> >>>
>> >>> __
>> >>> OpenStack Development Mailing List (not for usage questions)
>> >>> Unsubscribe:
>> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> Best regards,
>> >> Andrey Kurilin.
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> >
>> > --
>> > Regards,
>> > Jeffrey Zhang
>> > Blog: http://xcodest.me
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Best regards,
> Andrey Kurilin.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance][tc][rally] Rally (Re: [rally][release] How rally release version)

2017-02-13 Thread Hayes, Graham
On 13/02/2017 14:02, Davanum Srinivas wrote:
> Andrey, Rally team,
>
> Do you want Rally to be dropped from Governance and Releases repository?
>
> Thanks,
> Dims
>

While I think the info should be removed from releases if there is no
useful data there, why the removal from governance?

Rally has a single deliverable, with no tags (and as such no expected
release schedule) - I am not sure why governance is mentioned here?

  - Graham

> On Mon, Feb 13, 2017 at 8:17 AM, Jeffrey Zhang  
> wrote:
>> @Andrey, thanks for the explanation.
>>
>> The issue is: how can i know which tag works with certain OpenStack branch?
>> For example, which tag
>> works with OpenStack Ocata release? There is no place recording this.
>>
>> I also found there are some other project do not use "project/releases"
>> project. May they are using
>> the same solution as rally.
>>
>> On Mon, Feb 13, 2017 at 6:14 PM, Andrey Kurilin 
>> wrote:
>>>
>>> Hi Jeffrey,
>>>
>>> Rally team do not use "releases" repo at all, that is why information at
>>> [2] is outdated.
>>> Our release workflow is making a proper tag and pushing it via Gerrit. I
>>> find it more convenient.
>>>
>>>
>>>
>>> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang 
>>> wrote:

 Hey guys,

 I found rally already releases its 0.8.1 tag from[0][1]. But
 I found nothing in openstack/releases project[2]. How rally
 create tag?

 [0] http://tarballs.openstack.org/rally/
 [1] https://github.com/openstack/rally/releases
 [2]
 https://github.com/openstack/releases/blob/master/deliverables/_independent/rally.yaml#L45

 --
 Regards,
 Jeffrey Zhang
 Blog: http://xcodest.me


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Andrey Kurilin.
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 6)

2017-02-13 Thread Attila Darazs
As always, if these topics interest you and you want to contribute to 
the discussion, feel free to join the next meeting:


Time: Thursdays, 15:30-16:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

We had only about half the usual attendance on our Thursday meeting as 
people had conflicts and other hindrances. I joined it from an airport 
lobby. We still managed to do some good work.


== Task prioritization ==

Our main focus was on prioritizing the remaining tasks for Quickstart 
upstream transition. There are a few high priority items which we put in 
the Next column on the RDO Infra board. See all the outstanding "Q to U" 
(Quickstart to Upstream) cards here[1].


Some of these are simple, quick low-hanging fruit; a few are bigger 
chunks of work that need careful attention, like making sure that our 
multinode workflow can be reproduced over libvirt for easier debugging.


== Quickstart extra roles ==

We pulled all the useful roles into the quickstart-extras repo when we 
created it, and it seems it might be better if a few very specialized 
ones lived outside of it.


One example is Raul's validate-ha role, which we will split off to speed 
up development, as most cores are not involved in this and gates are not 
testing it.


== Update on transitioning to the new Quickstart jobs ==

We will use the job type field from the upstream jobs to figure out 
which quickstart job config we have to use for gate jobs (not the job name).


In addition to this, Gabrielle will tackle the issue of mixing the old 
and new jobs, and run them in parallel, letting us transition them one 
by one. Details in the trello card[2].


== Gating improvement ==

I was part of a meeting last week where we tried to identify problem 
areas for our testing and came to the conclusion that the ungated 
openstack common repo[3] is sometimes the cause for gating breaks.


We should start gating it to improve upstream quickstart job stability.

Best regards,
Attila

[1] 
https://trello.com/b/HhXlqdiu/rdo?menu=filter=label:%5BQ%20to%20U%5D

[2] https://trello.com/c/dNTpzD1n
[3] https://buildlogs.centos.org/centos/7/cloud/x86_64/openstack-ocata/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance][tc][rally] Rally (Re: [rally][release] How rally release version)

2017-02-13 Thread Andrey Kurilin
Hi Dims!

I do not want to say anything except to ask: why are you proposing to do that?

I tried to find any rules about releasing at
https://governance.openstack.org, but I could not.
Please point me to the rule I broke.

On Mon, Feb 13, 2017 at 3:57 PM, Davanum Srinivas  wrote:

> Andrey, Rally team,
>
> Do you want Rally to be dropped from Governance and Releases repository?
>
> Thanks,
> Dims
>
> On Mon, Feb 13, 2017 at 8:17 AM, Jeffrey Zhang 
> wrote:
> > @Andrey, thanks for the explanation.
> >
> > The issue is: how can i know which tag works with certain OpenStack
> branch?
> > For example, which tag
> > works with OpenStack Ocata release? There is no place recording this.
> >
> > I also found there are some other project do not use "project/releases"
> > project. May they are using
> > the same solution as rally.
> >
> > On Mon, Feb 13, 2017 at 6:14 PM, Andrey Kurilin 
> > wrote:
> >>
> >> Hi Jeffrey,
> >>
> >> Rally team do not use "releases" repo at all, that is why information at
> >> [2] is outdated.
> >> Our release workflow is making a proper tag and pushing it via Gerrit. I
> >> find it more convenient.
> >>
> >>
> >>
> >> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang  >
> >> wrote:
> >>>
> >>> Hey guys,
> >>>
> >>> I found rally already releases its 0.8.1 tag from[0][1]. But
> >>> I found nothing in openstack/releases project[2]. How rally
> >>> create tag?
> >>>
> >>> [0] http://tarballs.openstack.org/rally/
> >>> [1] https://github.com/openstack/rally/releases
> >>> [2]
> >>> https://github.com/openstack/releases/blob/master/
> deliverables/_independent/rally.yaml#L45
> >>>
> >>> --
> >>> Regards,
> >>> Jeffrey Zhang
> >>> Blog: http://xcodest.me
> >>>
> >>>
> >>> 
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >>
> >>
> >> --
> >> Best regards,
> >> Andrey Kurilin.
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Regards,
> > Jeffrey Zhang
> > Blog: http://xcodest.me
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][release] How rally release version

2017-02-13 Thread Doug Hellmann
Excerpts from Andrey Kurilin's message of 2017-02-13 12:14:31 +0200:
> Hi Jeffrey,
> 
> Rally team do not use "releases" repo at all, that is why information at
> [2] is outdated.
> Our release workflow is making a proper tag and pushing it via Gerrit. I
> find it more convenient.

Please propose an update to openstack/releases to either update the
history information for Rally's releases or to delete the relevant file
entirely to avoid confusion.

Doug
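
For anyone picking this up, a rough sketch of what proposing such an update
looks like (the file path is the one from [2] earlier in the thread; the
branch name and commit message below are only placeholders):

    git clone https://git.openstack.org/openstack/releases
    cd releases
    # edit deliverables/_independent/rally.yaml (or delete it entirely)
    git checkout -b fix-rally-history
    git commit -a -m "Update rally release history"
    git review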

> 
> 
> 
> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang 
> wrote:
> 
> > Hey guys,
> >
> > I found rally already releases its 0.8.1 tag from[0][1]. But
> > I found nothing in openstack/releases project[2]. How rally
> > create tag?
> >
> > [0] http://tarballs.openstack.org/rally/
> > [1] https://github.com/openstack/rally/releases
> > [2] https://github.com/openstack/releases/blob/master/deliverables/_
> > independent/rally.yaml#L45
> >
> > --
> > Regards,
> > Jeffrey Zhang
> > Blog: http://xcodest.me
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][kolla][kuryr][openstack-ansible][openstack-helm] OpenStack on containers leaveraging kuryr

2017-02-13 Thread Steven Dake (stdake)
Flavio,

Somehow the fuel project and openstack-ansible project got left off the 
taglines.  I’m not sure if the fuel peeps saw this thread, but I know the 
openstack-ansible peeps didn’t see the thread.  As a result, I’ve added those 
taglines.  Hopefully we can include the broader deployment tools community  in 
the 1 hour session since the room seats 50 peeps.

Regards
-steve


-Original Message-
From: Flavio Percoco 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, February 10, 2017 at 7:24 AM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [tripleo][kolla][openstack-helm][kuryr] OpenStack 
on containers leaveraging kuryr

On 09/02/17 09:57 +0100, Flavio Percoco wrote:
>Greetings,
>
>I was talking with Tony and he mentioned that he's recording a new demo for
>kuryr and, well, it'd be great to also use the containerized version of 
TripleO
>for the demo.
>
>His plan is to have this demo out by next week and that may be too tight 
for the
>containerized version of TripleO (it may be not, let's try). That said, I 
think
>it's still a good opportunity for us to sit down at the PTG and play with 
this a
>bit further.
>
>So, before we set a date and time for this, I wanted to extend the invite 
to
>other folks and see if there's some interest. It be great to also have 
folks
>from Kolla and openstack-helm joining.
>
>Looking forward to hearing ideas and hacking with y'all,
>Flavio

So, given the interest and my hope to group as much folks from other teams 
as
possible, what about we just schedule this for Wednesday at 09:00 am ?

I'm not sure what room we can crash yet but I'll figure it out soon and let
y'all know.

Any objections/observations?
Flavio

-- 
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-13 Thread Flavio Percoco

On 13/02/17 14:37 +0100, Thierry Carrez wrote:

Brian Rosmaita wrote:

I want to give all interested parties a heads up that I have scheduled a
session in the Macon room from 9:30-10:30 a.m. on Thursday morning
(February 23).
[...]


Thanks for setting this meeting up. I'll be there!


Ditto! I'm looking forward to this discussion and I'll hold any comments back
until it happens next week.

Cheers,
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][release] How rally release version

2017-02-13 Thread Andrey Kurilin
On Mon, Feb 13, 2017 at 3:17 PM, Jeffrey Zhang 
wrote:

> @Andrey, thanks for the explanation.
>
> The issue is: how can i know which tag works with certain OpenStack
> branch? For example, which tag
> works with OpenStack Ocata release? There is no place recording this.
>
>
The Rally project is not aligned with any particular OpenStack release, since
we do not depend much on them. The latest release of Rally should work with
all OpenStack releases.
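
So, as a rough example, a deployment tool can simply install the latest
release, or pin a tag if it wants reproducible builds (this assumes the
package published on PyPI; the pin below is only an illustration):

    pip install rally
    pip install rally==0.8.1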


> I also found there are some other project do not use "project/releases"
> project. May they are using
> the same solution as rally.
>
> On Mon, Feb 13, 2017 at 6:14 PM, Andrey Kurilin 
> wrote:
>
>> Hi Jeffrey,
>>
>> Rally team do not use "releases" repo at all, that is why information at
>> [2] is outdated.
>> Our release workflow is making a proper tag and pushing it via Gerrit. I
>> find it more convenient.
>>
>>
>>
>> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang 
>> wrote:
>>
>>> Hey guys,
>>>
>>> I found rally already releases its 0.8.1 tag from[0][1]. But
>>> I found nothing in openstack/releases project[2]. How rally
>>> create tag?
>>>
>>> [0] http://tarballs.openstack.org/rally/
>>> [1] https://github.com/openstack/rally/releases
>>> [2] https://github.com/openstack/releases/blob/master/delive
>>> rables/_independent/rally.yaml#L45
>>>
>>> --
>>> Regards,
>>> Jeffrey Zhang
>>> Blog: http://xcodest.me
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Andrey Kurilin.
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [governance][tc][rally] Rally (Re: [rally][release] How rally release version)

2017-02-13 Thread Davanum Srinivas
Andrey, Rally team,

Do you want Rally to be dropped from Governance and Releases repository?

Thanks,
Dims

On Mon, Feb 13, 2017 at 8:17 AM, Jeffrey Zhang  wrote:
> @Andrey, thanks for the explanation.
>
> The issue is: how can i know which tag works with certain OpenStack branch?
> For example, which tag
> works with OpenStack Ocata release? There is no place recording this.
>
> I also found there are some other project do not use "project/releases"
> project. May they are using
> the same solution as rally.
>
> On Mon, Feb 13, 2017 at 6:14 PM, Andrey Kurilin 
> wrote:
>>
>> Hi Jeffrey,
>>
>> Rally team do not use "releases" repo at all, that is why information at
>> [2] is outdated.
>> Our release workflow is making a proper tag and pushing it via Gerrit. I
>> find it more convenient.
>>
>>
>>
>> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang 
>> wrote:
>>>
>>> Hey guys,
>>>
>>> I found rally already releases its 0.8.1 tag from[0][1]. But
>>> I found nothing in openstack/releases project[2]. How rally
>>> create tag?
>>>
>>> [0] http://tarballs.openstack.org/rally/
>>> [1] https://github.com/openstack/rally/releases
>>> [2]
>>> https://github.com/openstack/releases/blob/master/deliverables/_independent/rally.yaml#L45
>>>
>>> --
>>> Regards,
>>> Jeffrey Zhang
>>> Blog: http://xcodest.me
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Best regards,
>> Andrey Kurilin.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-13 Thread Ian Cordasco
-Original Message-
From: Clint Byrum 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: February 12, 2017 at 20:09:04
To: openstack-dev 
Subject:  Re: [openstack-dev] [tc][glance][glare][all]
glance/glare/artifacts/images at the PTG

> Excerpts from Brian Rosmaita's message of 2017-02-10 12:39:11 -0500:
> > I want to give all interested parties a heads up that I have scheduled a
> > session in the Macon room from 9:30-10:30 a.m. on Thursday morning
> > (February 23).
> >
> > Here's what we need to discuss. This is from my perspective as Glance
> > PTL, so it's going to be Glance-centric. This is a quick narrative
> > description; please go to the session etherpad [0] to turn this into a
> > specific set of discussion items.
> >
> > Glance is the OpenStack image cataloging and delivery service. A few
> > cycles ago (Juno?), someone noticed that maybe Glance could be
> > generalized so that instead of storing image metadata and image data,
> > Glance could store arbitrary digital "stuff" along with metadata
> > describing the "stuff". Some people (like me) thought that this was an
> > obvious direction for Glance to take, but others (maybe wiser, cooler
> > heads) thought that Glance needed to focus on image cataloging and
> > delivery and make sure it did a good job at that. Anyway, the Glance
> > mission statement was changed to include artifacts, but the Glance
> > community never embraced them 100%, and in Newton, Glare split off as
> > its own project (which made sense to me, there was too much unclarity in
> > Glance about how Glare fit in, and we were holding back development, and
> > besides we needed to focus on images), and the Glance mission statement
> > was re-amended specifically to exclude artifacts and focus on images and
> > metadata definitions.
> >
> > OK, so the current situation is:
> > - Glance "does" image cataloging and delivery and metadefs, and that's
> > all it does.
> > - Glare is an artifacts service (cataloging and delivery) that can also
> > handle images.
> >
> > You can see that there's quite a bit of overlap. I gave you the history
> > earlier because we did try to work as a single project, but it did not
> > work out.
> >
> > So, now we are in 2017. The OpenStack development situation has been
> > fragile since the second half of 2016, with several big OpenStack
> > sponsors pulling way back on the amount of development resources being
> > contributed to the community. This has left Glare in the position where
> > it cannot qualify as a Bit Tent project, even though there is interest
> > in artifacts.
> >
> > Mike Fedosin, the PTL for Glare, has asked me about Glare becoming part
> > of the Glance project again. I will be completely honest, I am inclined
> > to say "no". I have enough problems just getting Glance stuff done (for
> > example, image import missed Ocata). But in addition to doing what's
> > right for Glance, I want to do what's right for OpenStack. And I look
> > at the overlap and think ...
> >
> > Well, what I think is that I don't want to go through the Juno-Newton
> > cycles of argument again. And we have to do what is right for our users.
> >
> > The point of this session is to discuss:
> > - What does the Glance community see as the future of Glance?
> > - What does the wider OpenStack community (TC) see as the future of Glance?
> > - Maybe, more importantly, what does the wider community see as the
> > obligations of Glance?
> > - Does Glare fit into this vision?
> > - What kind of community support is there for Glare?
> >
> > My reading of Glance history is that while some people were on board
> > with artifacts as the future of Glance, there was not a sufficient
> > critical mass of the Glance community that endorsed this direction and
> > that's why things unravelled in Newton. I don't want to see that happen
> > again. Further, I don't think the Glance community got the word out to
> > the broader OpenStack community about the artifacts project, and we got
> > a lot of pushback along the lines of "WTF? Glance needs to do images"
> > variety. And probably rightly so -- Glance needs to do images. My
> > point is that I don't want Glance to take Glare back unless it fits in
> > with what the community sees as the appropriate direction for Glance.
> > And I certainly don't want to take it back if the entire Glance
> > community is not on board.
> >
> > Anyway, that's what we're going to discuss. I've booked one of the
> > fishbowl rooms so we can get input from people beyond just the Glance
> > and Glare projects.
> >
>
> Does anybody else feel like this is deja vu of Neutron's inception?
>
> While I understand sometimes there are just incompatibilities in groups,
> I think we should probably try again. Unfortunately, it sounds like
> Glare already did the Neutron thing of starting from scratch and sort
> of overlapping in 

Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-13 Thread Thierry Carrez
Brian Rosmaita wrote:
> I want to give all interested parties a heads up that I have scheduled a
> session in the Macon room from 9:30-10:30 a.m. on Thursday morning
> (February 23).
> [...]

Thanks for setting this meeting up. I'll be there!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] A sneak peek to TripleO + Containers

2017-02-13 Thread Flavio Percoco

On 13/02/17 08:19 -0500, Emilien Macchi wrote:

On Mon, Feb 13, 2017 at 5:46 AM, Flavio Percoco  wrote:

Hello,

I've been playing with a self-installing container for the containerized
TripleO
undercloud and I thought I'd share some of the progress we've made so far.

This is definitely not at its final, ideal, state but I wanted to provide a
sneak peek to what is coming and what the updates/content of the
TripleO+Containers sessions will be next week at the PTG.

The image[0] shows the output of[1] after running the containerized
composable
undercloud deployment using a self-installing container[2]. Again, this is
not
stable and it still needs work. You can see in the screenshot that one of
the
neutron's agent failed and from the repo[3] that I'm using the scripts we've
been using for development instead of using oooq or something like that. One
interesting thing is that running[2] will leave you with an almost entirely
clean host. It still writes some stuff in `/var/lib` and `/etc/puppet` but
that
can be improved for sure.

Anyway, after all the disclaimers, I hope you'll be able to appreciate the
progress we've made. Dan Prince has been able to deploy an overcloud on top
of
the containerized undercloud already, which is great news.


Excellent work folks and thanks for the update!

Next step: experimental CI job? :-)


Yes, as soon as the required client patches land (reviews in progress) we'll
start working on this.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API WG PTG planning

2017-02-13 Thread Thierry Carrez
Ed Leafe wrote:
> On Feb 10, 2017, at 2:48 PM, Matt Riedemann  wrote:
> 
>> I assumed we'd take the opportunity to talk about capabilities [1] at the 
>> PTG but couldn't find any etherpad for the API WG on the wiki [2].
>>
>> Is the API WG getting together on Monday or Tuesday?
>>
>> [1] https://review.openstack.org/#/c/386555/
>> [2] https://wiki.openstack.org/wiki/PTG/Pike/Etherpads
> 
> We weren’t listed on the etherpad listing, so we didn’t know if we could take 
> a slot. So we asked the Architecture WG if we could share space with them. 
> The capabilities discussion is one of the ones we are planning on:
> 
> https://etherpad.openstack.org/p/ptg-architecture-workgroup

Yes, that was a bit of an oversight, difficult to fix one week before.

Just in case, I'll check today if we have a spare room on one of those
days, but if not it's probably safe to leverage the Arch WG room (on
Tuesday) or one of the reservable discussion rooms.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] A sneak peek to TripleO + Containers

2017-02-13 Thread Emilien Macchi
On Mon, Feb 13, 2017 at 5:46 AM, Flavio Percoco  wrote:
> Hello,
>
> I've been playing with a self-installing container for the containerized
> TripleO
> undercloud and I thought I'd share some of the progress we've made so far.
>
> This is definitely not at its final, ideal, state but I wanted to provide a
> sneak peek to what is coming and what the updates/content of the
> TripleO+Containers sessions will be next week at the PTG.
>
> The image[0] shows the output of[1] after running the containerized
> composable
> undercloud deployment using a self-installing container[2]. Again, this is
> not
> stable and it still needs work. You can see in the screenshot that one of
> the
> neutron's agent failed and from the repo[3] that I'm using the scripts we've
> been using for development instead of using oooq or something like that. One
> interesting thing is that running[2] will leave you with an almost entirely
> clean host. It still writes some stuff in `/var/lib` and `/etc/puppet` but
> that
> can be improved for sure.
>
> Anyway, after all the disclaimers, I hope you'll be able to appreciate the
> progress we've made. Dan Prince has been able to deploy an overcloud on top
> of
> the containerized undercloud already, which is great news.

Excellent work folks and thanks for the update!

Next step: experimental CI job? :-)

> [0] http://imgur.com/a/Mol28
> [1] docker ps -a --filter label=managed_by=docker-cmd
> [2] docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -ti
> flaper87/tripleo-undercloud-init-container
> [3] https://github.com/flaper87/tripleo-undercloud-init-container
>
> Enjoy,
> Flavio
>
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][release] How rally release version

2017-02-13 Thread Jeffrey Zhang
@Andrey, thanks for the explanation.

The issue is: how can I know which tag works with a certain OpenStack branch?
For example, which tag works with the OpenStack Ocata release? There is no
place recording this.

I also found there are some other projects that do not use the
"project/releases" project. Maybe they are using the same solution as Rally.

On Mon, Feb 13, 2017 at 6:14 PM, Andrey Kurilin 
wrote:

> Hi Jeffrey,
>
> Rally team do not use "releases" repo at all, that is why information at
> [2] is outdated.
> Our release workflow is making a proper tag and pushing it via Gerrit. I
> find it more convenient.
>
>
>
> On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang 
> wrote:
>
>> Hey guys,
>>
>> I found rally already releases its 0.8.1 tag from[0][1]. But
>> I found nothing in openstack/releases project[2]. How rally
>> create tag?
>>
>> [0] http://tarballs.openstack.org/rally/
>> [1] https://github.com/openstack/rally/releases
>> [2] https://github.com/openstack/releases/blob/master/
>> deliverables/_independent/rally.yaml#L45
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best regards,
> Andrey Kurilin.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] PTG sessions - agenda is drafted! (action required)

2017-02-13 Thread Emilien Macchi
If you plan to attend TripleO sessions in Atlanta, please review the
first draft of our schedule:
https://etherpad.openstack.org/p/tripleo-ptg-pike

Please let me know ASAP if we need to move a session or if there is
some important overlap.

I'll let chairs prepare a blueprint / etherpad / some context so we
can have productive sessions. Also, if the session needs cross-project
collaboration, I'll let the chair find someone from the other team
to attend the meeting.

Again, this is a draft, any (quick) feedback is welcome so we can be
ready by end of the week.

Thanks everyone,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova-docker deleted from github

2017-02-13 Thread Davanum Srinivas
Debdipta,

GitHub is merely a mirror. Over the last 2 years, multiple messages
have gone to the -dev@ and -operators@ mailing lists about nova-docker not
having a core team and needing one to keep it going. So we had to shut
down shop on it:
http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

Please feel free to use the older branches from git.openstack.org:
https://git.openstack.org/cgit/openstack/nova-docker/
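
For example, something along these lines should still work to grab the old
code (the stable branch name below is only a guess at what remains; check
the branch listing first):

    git clone https://git.openstack.org/openstack/nova-docker
    cd nova-docker
    git branch -r                 # see which stable branches are still there
    git checkout stable/newton    # example branch name; pick one that exists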

Since nova-docker is under the Apache license, please feel free to
maintain a fork on GitHub if you choose (and I can redirect people to
your fork).

Thanks,
Dims

On Mon, Feb 13, 2017 at 1:46 AM, Debdipta Ghosh  wrote:
> Hello  Devanum,
>
> I just noticed you have deleted nova-docker from github. I understand a lot
> projects like magnum forked from nova-docker and there was no need to add
> new code here.
>
> Still I think, there was no cost involved if there some code hosted on
> github.
> Can you revert back your changes?
>
> Thanking you,
> Debdipta
>
> Think before you print  Go Green, Reduce, Reuse & Recycle
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo][mistral][all]PTG: Cross-Project: OpenStack Orchestration Integration

2017-02-13 Thread Emilien Macchi
On Mon, Feb 13, 2017 at 4:48 AM, Rico Lin  wrote:

> Dear all
>
> PTG is approaching, we have few ideas around TripleO team ([1] and [2])
> about use case like using Mistral through Heat. It seems some great
> OpenStacker already start thing about how the Orchestration services (Heat,
> Mistral, and some other projects) could use together for a better developer
> or operator experiences. First, of curse,
> we will arrange a fishbowl design session on Wednesday morning.
> Let's settle with 10:00 am to 10:50 am at Macon (on level2) for now.
> Could teams kindly help to make sure they can attend this cross project
> session or need it reschedule?
>

Can we reschedule it? It seems like the only slot where we have sessions
organized is on Wednesday morning, for our container work:
https://etherpad.openstack.org/p/tripleo-ptg-pike

Wednesday 9:00 Cross-Teams talk about containers and networking
Wednesday 10:00: TripleO Containers status update and path forward

So I suggest Wednesday afternoon or Thursday or Friday morning. At your
convenience.


> Hopefully, above three team's schedule not conflict with this schedule.
> If the schedule is a perfect fit for all teams and you feel like this is
> part of your concerns, then we shall see you all there:)
>
>
> [1] https://review.openstack.org/#/c/267770/
> [2] http://lists.openstack.org/pipermail/openstack-dev/2017-
> January/110624.html
>
> --
> May The Force of OpenStack Be With You,
>
> *Rico Lin*irc: ricolin
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Designate] In what sense is it multi-tenant?

2017-02-13 Thread Hayes, Graham
On 10/02/17 21:51, Mike Spreitzer wrote:
> In what sense is Designate multi-tenant?  Can it be programmed to give
> different views to different DNS clients?  (If so, how?)
> 
> Thanks,
> Mike

It is multi-tenant in that it allows multiple users to use the same
DNS infrastructure, while keeping control of DNS zones limited on a
project-by-project basis (think of services like DynECT, Route53,
SimpleDNS, etc.).

We also have support for "pools" of DNS servers, which could be used to
show different views to different networks, but they are not a per-project
thing - they are a global resource (right now).
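
As a rough illustration of the per-project scoping (this assumes the
designate plugin for python-openstackclient is installed; the names below
are made up):

    # as a member of project A
    openstack zone create --email dns-admin@project-a.example project-a.example.
    openstack zone list           # shows only project A's zones

    # the same "openstack zone list" run with project B credentials
    # will not show project A's zone at all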

Thanks,

- Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] A sneak peek to TripleO + Containers

2017-02-13 Thread Flavio Percoco

On 13/02/17 11:46 +0100, Flavio Percoco wrote:

Hello,

I've been playing with a self-installing container for the containerized TripleO
undercloud and I thought I'd share some of the progress we've made so far.

This is definitely not at its final, ideal, state but I wanted to provide a
sneak peek to what is coming and what the updates/content of the
TripleO+Containers sessions will be next week at the PTG.

The image[0] shows the output of[1] after running the containerized composable
undercloud deployment using a self-installing container[2]. Again, this is not
stable and it still needs work. You can see in the screenshot that one of the
neutron's agent failed and from the repo[3] that I'm using the scripts we've
been using for development instead of using oooq or something like that. One
interesting thing is that running[2] will leave you with an almost entirely
clean host. It still writes some stuff in `/var/lib` and `/etc/puppet` but that
can be improved for sure.

Anyway, after all the disclaimers, I hope you'll be able to appreciate the
progress we've made. Dan Prince has been able to deploy an overcloud on top of
the containerized undercloud already, which is great news.

[0] http://imgur.com/a/Mol28
[1] docker ps -a --filter label=managed_by=docker-cmd
[2] docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -ti 
flaper87/tripleo-undercloud-init-container
[3] https://github.com/flaper87/tripleo-undercloud-init-container


I should have mentioned that this also requires installing openvswitch and NTP
on the host. Again, these are things we're working on and that can be fixed,
but #fulltransparency.

Flavio


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] A sneak peek to TripleO + Containers

2017-02-13 Thread Flavio Percoco

Hello,

I've been playing with a self-installing container for the containerized TripleO
undercloud and I thought I'd share some of the progress we've made so far.

This is definitely not at its final, ideal, state but I wanted to provide a
sneak peek to what is coming and what the updates/content of the
TripleO+Containers sessions will be next week at the PTG.

The image[0] shows the output of [1] after running the containerized composable
undercloud deployment using a self-installing container[2]. Again, this is not
stable and it still needs work. You can see in the screenshot that one of the
neutron agents failed, and from the repo[3] that I'm using the scripts we've
been using for development instead of oooq or something like that. One
interesting thing is that running [2] will leave you with an almost entirely
clean host. It still writes some stuff in `/var/lib` and `/etc/puppet`, but
that can be improved for sure.

Anyway, after all the disclaimers, I hope you'll be able to appreciate the
progress we've made. Dan Prince has been able to deploy an overcloud on top of
the containerized undercloud already, which is great news.

[0] http://imgur.com/a/Mol28
[1] docker ps -a --filter label=managed_by=docker-cmd
[2] docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -ti 
flaper87/tripleo-undercloud-init-container
[3] https://github.com/flaper87/tripleo-undercloud-init-container
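
If anyone wants to poke at the failed agent, a rough sketch of how to dig
into it (the container name below is a placeholder; use whatever [1] prints):

    docker ps -a --filter label=managed_by=docker-cmd
    docker logs <neutron_agent_container>      # agent's stdout/stderr
    docker inspect <neutron_agent_container>   # exit code, mounts, env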

Enjoy,
Flavio


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][magnum][murano][sahara][tripleo][mistral][all]PTG: Cross-Project:Orchestration feedback and announcement

2017-02-13 Thread Rico Lin
Dear all

We would like to have a cross-project fishbowl session for Orchestration
feedback and announcements.
We would like to help with any improvement that will potentially benefit
other projects. That's why we need your feedback.
Heat has landed some cool improvements, like a 60% reduction in memory usage
in the last cycle, a stable convergence engine, etc. Therefore, we would like
to check with teams whether those features can be integrated and enabled
within their projects. If not, which goals do we still need to make that
happen?


*Let's schedule this session in Macon (on level 2) at 11:00 am - 12:00 pm on
Wednesday morning for now.*
Could teams kindly make sure they can attend this cross-project session, or
let us know if it needs to be rescheduled?
Hopefully this does not conflict with any team's schedule.
If the schedule fits all teams and you feel this touches on your concerns,
then we hope to see you all there :)

-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca] Ideas to work on

2017-02-13 Thread witold.be...@est.fujitsu.com
Hi,

Here is the URL to the monasca-log-api documentation [1].

Cheers
Witek


[1] https://github.com/openstack/monasca-log-api/tree/master/documentation



I would like to look at logs publishing as well. But unfortunately I did not 
find the monasca-log-api doc, which is supposed to be at 
https://github.com/openstack/monasca-log-api/tree/master/docs . I don't know 
how this log-api works now. Please share me a copy of the doc if you have one.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][release] How rally release version

2017-02-13 Thread Andrey Kurilin
Hi Jeffrey,

The Rally team does not use the "releases" repo at all; that is why the
information at [2] is outdated.
Our release workflow is to make a proper tag and push it via Gerrit. I find
it more convenient.
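
Roughly, the workflow looks like this (the version number is just an example,
and it assumes a "gerrit" remote as set up by git-review plus tag-pushing
rights on the project):

    git checkout master && git pull
    git tag -s 0.8.1 -m "Rally 0.8.1"    # signed, annotated tag
    git push gerrit 0.8.1                # pushing the tag triggers the release automation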



On Mon, Feb 13, 2017 at 9:13 AM, Jeffrey Zhang 
wrote:

> Hey guys,
>
> I found rally already releases its 0.8.1 tag from[0][1]. But
> I found nothing in openstack/releases project[2]. How rally
> create tag?
>
> [0] http://tarballs.openstack.org/rally/
> [1] https://github.com/openstack/rally/releases
> [2] https://github.com/openstack/releases/blob/master/deliverables/_
> independent/rally.yaml#L45
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >