[openstack-dev] [Neutron] Canceling next upstream meeting

2017-05-08 Thread Jakub Libosvar
Hi folks,

Due to the OpenStack Summit, I'm canceling the next upstream meeting on Tue, May 9th.

Cheers,
Jakub



Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-08 Thread Matt Riedemann

On 5/8/2017 1:24 PM, Octave J. Orgeron wrote:

Now for Oracle, we definitely need more 3rd party CI to make it easier
to test our drivers, components, and patches against so that it's easier
for the community to validate things. However, it takes time, resources,
and money to make that happen. Hopefully that will get sorted out over
time. But even if we make all of the investments in setting that up, we
still need the upstream teams to come to the table and not shun us away
just because we are Oracle :)


I'd recommend talking with Drew Thorstensen and the IBM PowerVM team. 
They persistently worked with the nova team over a few cycles to finally 
get to the point where we agreed to bring their driver in tree, but not 
before we knew they already had open-sourced their nova driver code (the 
driver code, not the hypervisor code) and were running 3rd party CI 
against it successfully. But they are really the example now for anyone 
trying to get new virt drivers into Nova.


--

Thanks,

Matt



Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-08 Thread Matt Riedemann

On 5/8/2017 1:10 PM, Octave J. Orgeron wrote:

I do agree that scalability and high-availability are definitely issues
for OpenStack when you dig deeper into the sub-components. There is a
lot of re-inventing of the wheel when you look at how distributed
services are implemented inside of OpenStack, and at their deficiencies. For some
services you have a scheduler that can scale-out, but the conductor or
worker process doesn't. A good example is cinder, where cinder-volume
doesn't scale-out in a distributed manner and doesn't have a good
mechanism for recovering when an instance fails. All across the services
you see different methods for coordinating requests and tasks such as
rabbitmq, redis, memcached, tooz, mysql, etc. So for an operator, you
have to sift through those choices and configure the prerequisite
infrastructure. This is a good example of a problem that should be
solved with a single architecturally sound solution that all services
can standardize on.


There was an architecture workgroup specifically designed to understand 
past architectural decisions in OpenStack, and what the differences are 
in the projects, and how to address some of those issues, but for lack 
of participation the group dissolved shortly after the Barcelona summit. 
This is, again, another example: if you want to make these kinds of 
massive changes, it's going to take massive involvement and leadership.
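
To make the coordination point above concrete: tooz already wraps several
of those backends (memcached, redis, zookeeper, etcd, ...) behind one API.
A minimal, hedged sketch follows; the backend URL, member id, and lock name
are illustrative assumptions, not taken from this thread.

    # A hedged sketch: tooz hides the choice of coordination backend behind
    # one API; swap the URL for redis:// or zookeeper:// without code changes.
    from tooz import coordination

    def do_volume_operation():
        pass  # placeholder for the critical section, e.g. extending a volume

    coordinator = coordination.get_coordinator(
        'memcached://127.0.0.1:11211',   # assumed backend URL
        b'cinder-volume-host-1')         # assumed unique id for this worker
    coordinator.start()

    # A distributed lock: only one worker cluster-wide runs the section
    # below, the kind of primitive a scale-out cinder-volume would need.
    with coordinator.get_lock(b'volume-0001'):
        do_volume_operation()

    coordinator.stop()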




The problem in a lot of those cases comes down to development being
detached from the actual use cases customers and operators are going to
use in the real world. Having a distributed control plane with multiple
instances of the api, scheduler, coordinator, and other processes is
typically not testable without a larger hardware setup. When you get to
large scale deployments, you need an active/active setup for the control
plane. It's definitely not something you could develop for or test
against on a single laptop with devstack, especially if you want to use
more than a handful of the OpenStack services.


I think we can all agree with this. Developers don't have a lab with 
1000 nodes lying around to hack on. There was OSIC but that's gone. I've 
been requesting help in Nova from companies to do scale testing and help 
us out with knowing what the major issues are, and report those back in 
a form so we can work on those issues. People will report there are 
issues, but not do the profiling, or at least not report the results of 
profiling, upstream to help us out. So again, this is really up to 
companies that have the resources to do this kind of scale testing and 
report back and help fix the issues upstream in the community. That 
doesn't require OpenStack 2.0.
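
On the laptop-scale point above, for reference: a multi-node devstack is
roughly the ceiling of small-scale testing. A hedged sketch of a subnode
local.conf (the IP addresses and service list are illustrative assumptions)
that joins a single extra compute node to an all-in-one control plane, which
is still nowhere near an active/active control plane:

    [[local|localrc]]
    HOST_IP=192.168.0.11               # this subnode (assumed address)
    SERVICE_HOST=192.168.0.10          # the all-in-one control-plane node
    MYSQL_HOST=$SERVICE_HOST
    RABBIT_HOST=$SERVICE_HOST
    GLANCE_HOSTPORT=$SERVICE_HOST:9292
    MULTI_HOST=1
    # only the compute and agent services run here; the api, scheduler,
    # and conductor processes all stay on the single SERVICE_HOST
    ENABLED_SERVICES=n-cpu,q-agt,c-vol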


--

Thanks,

Matt



Re: [openstack-dev] [ptls] Come Pick up Mascot Stickers

2017-05-08 Thread Kendall Nelson
Please disregard that last email! Typing in the wrong window fail. I will
let you all know details about more sticker pickup soon!

-Kendall Nelson

On Mon, May 8, 2017 at 6:54 PM Kendall Nelson  wrote:

> Hello PTLs!
>
> If you didn't get your stickers today, they will still be available in the
> foundation staff loungejung
>
> On Mon, May 8, 2017 at 1:12 PM Kendall Nelson 
> wrote:
>
>> Hello PTLs!
>>
>> The first pickup time will be from 2 to 4 pm today in the Foundation
>> Lounge Hynes 2nd Floor outside 206 from Ildiko Vancsa.
>>
>> From there you are free to distribute them as you like :)
>>
>> -Kendall Nelson
>>
>


Re: [openstack-dev] [ptls] Come Pick up Mascot Stickers

2017-05-08 Thread Kendall Nelson
Hello PTLs!

If you didn't get your stickers today, they will still be available in the
foundation staff loungejung

On Mon, May 8, 2017 at 1:12 PM Kendall Nelson  wrote:

> Hello PTLs!
>
> The first pickup time will be from 2 to 4 pm today in the Foundation
> Lounge Hynes 2nd Floor outside 206 from Ildiko Vancsa.
>
> From there you are free to distribute them as you like :)
>
> -Kendall Nelson
>


[openstack-dev] networking-sfc meetings cancelled this week

2017-05-08 Thread Henry Fourie
All,
  networking-sfc meetings will resume on May 18.
- Louis


Re: [openstack-dev] [neutron] - are you attending the Boston summit?

2017-05-08 Thread Miguel Lavalle
Dear Neutrinos,

I am working with Legal Sea Foods on a reservation for 30 people, Wednesday
at 7pm. I am assuming the 30 people who registered in the etherpad will
attend (https://etherpad.openstack.org/p/neutron-boston-summit-attendees).
If your name is in the etherpad and you DON'T plan to attend, please let me
know.

Legal Sea Foods has several locations close to the convention center. I
will send an update with the selected location as soon as I can finalize
the details with them. Please keep an eye on your inbox.

Cheers

Miguel

On Mon, May 8, 2017 at 7:57 AM, Kevin Benton  wrote:

> Let's plan for a social Wednesday night. I'll update this with a location
> once we find a place.
>
> On May 8, 2017 08:50, "MCCASLAND, TREVOR"  wrote:
>
>> Looking forward to it! RSVP? +1
>>
>>
>>
>> *From:* Sukhdev Kapur [mailto:sukhdevka...@gmail.com]
>> *Sent:* Saturday, May 06, 2017 12:31 AM
>> *To:* OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev@lists.openstack.org>
>> *Subject:* Re: [openstack-dev] [neutron] - are you attending the Boston
>> summit?
>>
>>
>>
>> Hey Neutron Folks,
>>
>>
>>
>> Following our past tradition, we should have Neutron dinner while we are
>> all in Boston.
>>
>> Miguel has few places in mind. I would propose that we nominate him as
>> the dinner organizer lieutenant.
>>
>>
>>
>> Miguel, I hope you will take us to some cool place.
>>
>>
>>
>> Thanks
>>
>> -Sukhdev
>>
>>
>>
>>
>>
>> On Thu, Apr 20, 2017 at 4:31 PM, Kevin Benton  wrote:
>>
>> Hi,
>>
>>
>>
>> If you are a Neutron developer attending the Boston summit, please add
>> your name to the etherpad here so we can plan a Neutron social and easily
> coordinate in person meetings:
> https://etherpad.openstack.org/p/neutron-boston-summit-attendees
>>
>>
>>
>> Cheers,
>>
>> Kevin Benton


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-08 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-05-08 20:18:35 +0000:
> On 2017-05-08 11:24:00 -0600 (-0600), Octave J. Orgeron wrote:
> [...]
> > none of those products that those drivers are written for are open
> > sourced and they meet less resistance to committing code upstream.
> > So I have to call BS on your comment that the community can't work
> > with us because Solaris isn't open sourced.
> 
> Totally not what I said.
> 
> My point was that constantly reminding management of one of the
> primary sources of friction might help. Working with free software
> communities becomes easier when you don't outright reject their
> values by deciding to cancel your open version of the thing you want
> them to help you support.
> 
> > Now for Oracle, we definitely need more 3rd party CI to make it
> > easier to test our drivers, components, and patches against so
> > that it's easier for the community to validate things. However, it
> > takes time, resources, and money to make that happen. Hopefully
> > that will get sorted out over time.
> 
> And _this_ was entirely the rest of my point, yes. Your needs seem
> quite similar to those of VMWare, XenServer and HyperV, so I
> fully expect Nova's core reviewers will hold Solaris support patches
> to the same validation requirements. We can't run Solaris in our
> upstream testing for the same reasons we can't run those other
> examples (they're not free software), so the onus is on the vendor
> to satisfy this need for continuous testing and reporting instead.
> 
> > But even if we make all of the investments in setting that up, we
> > still need the upstream teams to come to the table and not shun us
> > away just because we are Oracle :)
> [...]
> 
> Smiley or no, the assertion that our quality assurance choices are
> based on personal preference for some particular company over
> another is still mildly offensive.

Yes, let's keep in mind that the answer to these questions about
stable branches is and has been the same no matter who asked them.
Early in this thread we pointed out that this topic comes up
regularly, from different sources, and the answer remains the same:
Start by contributing to the existing stable maintenance, and either
improve the processes and tools to make it easier to do more and/or
recruit more people to spread the work around.

Doug



[openstack-dev] [os-upstream-institute] Post Mortem Training Meeting at the Summit

2017-05-08 Thread Kendall Nelson
Hello Everyone,

If you are interested in hearing how the Upstream Institute training went
this past weekend, come join us! We will be discussing what we thought went
well, what we need to work on for next time, next steps, etc.

There is a reserved hacking room slot in Hynes MR111 from 3-3:50 on
Thursday May 4th (Ildiko and I will need to leave 10 min early or so
because we have a session to present on the training).

Here is a place to start collecting your thoughts ahead of time[1].

Hope to see you there!

Kendall Nelson(diablo_rojo)

[1] https://etherpad.openstack.org/p/BOS_OUI_Post_Mortem


[openstack-dev] [StoryBoard] No StoryBoard Meeting this week

2017-05-08 Thread Kendall Nelson
Instead of a meeting, come to our talk about the migration to StoryBoard[1]!

There will be no meeting at 19:00 UTC on Wednesday May 10th. If you have
any questions or anything pressing, we will be around in the #storyboard
channel.  The next meeting will be Wednesday May 17th.

-Kendall Nelson

[1]
https://www.openstack.org/summit/boston-2017/summit-schedule/global-search?t=storyboard


Re: [openstack-dev] [freezer] Core team updates

2017-05-08 Thread Vitaliy Nogin
Hi,

As there has been no activity related to the freezer project from Tim
and Deklan during the last year, +1 from my side for removing them from
the core list.

Regards,
Vitaliy

> On May 9, 2017, at 00:03, Saad Zaher  wrote:
> 
> Hello everyone,
> 
> I would like to propose some core member updates to the Freezer core team. I 
> would like to remove the following users from core as they have become 
> inactive members.
> 
> Tim Buckley   
> Deklan Dieterly
> 
> Please vote +1 if you agree to these changes; otherwise, explain your opinion.
> 
> If there is no objection, I plan to remove them before the end of this week.
> 
> 
> ---
> Best Regards,
> Saad!


[openstack-dev] [freezer] Core team updates

2017-05-08 Thread Saad Zaher
Hello everyone,

I would like to propose some core member updates to the Freezer core team.
I would like to remove the following users from core as they have become
inactive members.


   - Tim Buckley
   - Deklan Dieterly


Please vote +1 if you agree to these changes; otherwise, explain your
opinion.

If there is no objection, I plan to remove them before the end of this week.


---
Best Regards,
Saad!


[openstack-dev] [freezer] No Freezer meeting this week

2017-05-08 Thread Saad Zaher
Hello Everyone,

As most people are at the summit, we're not going to have a Freezer
meeting this week. Freezer meetings will resume next week in the
usual time slot (Thursday @ 2 o'clock GMT) in #openstack-meeting-alt.


--
Best Regards,
Saad!


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-08 Thread Jeremy Stanley
On 2017-05-08 11:24:00 -0600 (-0600), Octave J. Orgeron wrote:
[...]
> none of those products that those drivers are written for are open
> sourced and they meet less resistance to committing code upstream.
> So I have to call BS on your comment that the community can't work
> with us because Solaris isn't open sourced.

Totally not what I said.

My point was that constantly reminding management of one of the
primary sources of friction might help. Working with free software
communities becomes easier when you don't outright reject their
values by deciding to cancel your open version of the thing you want
them to help you support.

> Now for Oracle, we definitely need more 3rd party CI to make it
> easier to test our drivers, components, and patches against so
> that it's easier for the community to validate things. However, it
> takes time, resources, and money to make that happen. Hopefully
> that will get sorted out over time.

And _this_ was entirely the rest of my point, yes. Your needs seem
quite similar to those of VMWare, XenServer and HyperV, so I
fully expect Nova's core reviewers will hold Solaris support patches
to the same validation requirements. We can't run Solaris in our
upstream testing for the same reasons we can't run those other
examples (they're not free software), so the onus is on the vendor
to satisfy this need for continuous testing and reporting instead.

> But even if we make all of the investments in setting that up, we
> still need the upstream teams to come to the table and not shun us
> away just because we are Oracle :)
[...]

Smiley or no, the assertion that our quality assurance choices are
based on personal preference for some particular company over
another is still mildly offensive.
-- 
Jeremy Stanley




[openstack-dev] [Gluon] IRC Meeting cancelled on May 10, 2017 and Re-convene on May 17, 2017

2017-05-08 Thread HU, BIN
Hello folks,

We are all in Boston this week, so we will cancel the IRC meeting this week
(May 10) and re-convene next week (May 17).

Thanks

Bin

[1] https://wiki.openstack.org/wiki/Gluon
[2] https://wiki.openstack.org/wiki/Meetings/Gluon





[openstack-dev] [all] Project On-Boarding Info Collection

2017-05-08 Thread Kendall Nelson
Hello!

If you are running a project onboarding session and have etherpads, slides,
etc. that you are using to educate new contributors, please send them to me!
I am collecting all the resources you are sharing in a single place for
people who weren't able to attend sessions.

Thanks!

Kendall (diablo_rojo)


Re: [openstack-dev] [kolla][kolla-ansible] Proposing Bertrand Lallau for kolla and kolla-ansible core

2017-05-08 Thread Dave Walker
+1, some great contributions!

Thanks

On 8 May 2017 at 19:11, Kwasniewska, Alicja 
wrote:

> +1 Congrats☺
>
> Regards,
> Alicja Kwasniewska
>
>
>
> *From: *"Vikram Hosakote (vhosakot)" 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Monday, May 8, 2017 at 6:54 AM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [kolla][kolla-ansible] Proposing Bertrand
> Lallau for kolla and kolla-ansible core
>
>
>
> +1  Great job Bertrand!
>
>
>
> Regards,
>
> Vikram Hosakote
>
> IRC:  vhosakot
>
>
>
> *From: *Michał Jastrzębski 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Tuesday, May 02, 2017 at 10:13 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *[openstack-dev] [kolla][kolla-ansible] Proposing Bertrand
> Lallau for kolla and kolla-ansible core
>
>
>
> Hello,
>
>
>
> It's my pleasure to start another core reviewer vote. Today it's
>
> Bertrand (blallau). Consider this mail my +1 vote. Members of
>
> kolla-ansible and kolla core team, please cast your votes:) Voting
>
> will be open for 2 weeks (until 16th of May).
>
>
>
> I also wanted to say that Bertrand went through our core mentorship
>
> program (if only for few weeks because he did awesome job before too)
>
> :)
>
>
>
> Thank you,
>
> Michal
>


Re: [openstack-dev] [MassivelyDistributed] Fog / Edge / Massively Distributed Cloud Sessions during the summit

2017-05-08 Thread lebre . adrien
Dear Edgar, 

As indicated into the WG chairs' session pad [1], the WG was previously 
entitled ``Massively Distributed Cloud''. 
The description appears on the WG wiki page [2] (and I sent an email to the 
user ML a few months ago to ask for the official creation [3]). 

After exchanging with the OpenStack foundation folks recently, they suggested 
renaming the Massively Distributed Clouds WG to the Fog/Edge/Massively 
Distributed Clouds WG. 
The old wiki page [4] refers to the new one [5]. 

Please let me know whether I missed something (I can update [2] to reflect the 
new name).
ad_rien_

[1] 
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews
[2] 
https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Working_Groups_and_Teams
 
[3] 
http://lists.openstack.org/pipermail/user-committee/2016-September/001232.html
[4] https://wiki.openstack.org/wiki/Massively_Distributed_Clouds
[5] https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds


- Original Message -
> From: "Edgar Magana" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> , "OpenStack
> Operators" , 
> openst...@lists.openstack.org
> Cc: "Shilla Saebi" 
> Sent: Monday, May 8, 2017 18:25:12
> Subject: Re: [openstack-dev] [MassivelyDistributed] Fog / Edge / Massively 
> Distributed Cloud Sessions during the summit
> 
> Hello,
> 
> On behalf of the User Community, I would like to understand whether this
> is officially being considered as a request to create the Working Group.
> I could have missed another email requesting the inclusion, but if not, I
> would like to discuss the goals and objectives of the WG. The User
> Committee will be glad to help you out if anything is needed.
> 
> I am cc'ing the rest of the UC members.
> 
> Thanks,
> 
> Edgar
> 
> On 5/5/17, 1:16 PM, "lebre.adr...@free.fr" 
> wrote:
> 
> Dear all,
> 
> A brief email to inform you about our schedule next week in
> Boston.
> 
> In addition to interesting presentations that will deal with
> Fog/Edge/Massively Distributed Clouds challenges [1], I would
> like to highlight two important sessions:
> 
> * A new Birds of a Feather session ``OpenStack on the Edge'' is
> now scheduled on Tuesday afternoon [2].
> This will be the primary call to action covered by Jonathan Bryce
> during Monday's keynote about Edge Computing.
> After introducing the goal of the WG, I will give the floor to
> participants to share their use-case (3/4 min for each
> presentation)
> The Foundation has personally invited four large users that are
> planning for fog/edge computing.
> This will guide the WG for the future and hopefully get more
> contributors involved.
> Moreover, many of the Foundation staff already planned to attend
> and talk about the in-planning-phase OpenDev event and get
> input.
> The etherpad for this session is available at [3].
> 
> * Our regular face-to-face meeting for current and new members to
> discuss next cycle plans is still scheduled on Wednesday
> afternoon [4].
> The etherpad for this session is available at [5].
> 
> I encourage all of you to attend both sessions.
> See you in Boston and have a safe trip
> ad_rien_
> 
> [1] https://www.openstack.org/summit/boston-2017/summit-schedule/global-search?t=edge
> [2] https://www.openstack.org/summit/boston-2017/summit-schedule/events/18988/openstack-on-the-edge-fogedgemassively-distributed-clouds-birds-of-a-feather
> [3] https://etherpad.openstack.org/p/BOS-UC-brainstorming-MassivelyDistributed-Fog-Edge
> [4] https://www.openstack.org/summit/boston-2017/summit-schedule/events/18671/fogedgemassively-distributed-clouds-working-group
> [5] https://etherpad.openstack.org/p/Massively_distributed_wg_boston_summit
> https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds

Re: [openstack-dev] [kolla][kolla-ansible] Proposing Bertrand Lallau for kolla and kolla-ansible core

2017-05-08 Thread Kwasniewska, Alicja
+1 Congrats☺

Regards,
Alicja Kwasniewska

From: "Vikram Hosakote (vhosakot)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, May 8, 2017 at 6:54 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [kolla][kolla-ansible] Proposing Bertrand Lallau 
for kolla and kolla-ansible core

+1  Great job Bertrand!

Regards,
Vikram Hosakote
IRC:  vhosakot

From: Michał Jastrzębski
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, May 02, 2017 at 10:13 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [kolla][kolla-ansible] Proposing Bertrand Lallau for 
kolla and kolla-ansible core

Hello,

It's my pleasure to start another core reviewer vote. Today it's
Bertrand (blallau). Consider this mail my +1 vote. Members of
kolla-ansible and kolla core team, please cast your votes:) Voting
will be open for 2 weeks (until 16th of May).

I also wanted to say that Bertrand went through our core mentorship
program (if only for few weeks because he did awesome job before too)
:)

Thank you,
Michal


Re: [openstack-dev] [nova] Discussions for DPDK support in OpenStack

2017-05-08 Thread TETSURO NAKAMURA

Thank you for information !

So you mean the situation has not changed since the referenced thread [1] 
on the qemu-devel ML three years ago, and the difficulties of ivshmem you 
mentioned are described in that thread. Am I right?


[1] "[Qemu-devel] Why I advise against using ivshmem"
https://lists.linuxfoundation.org/pipermail/virtualization/2014-June/026767.html

On 2017/05/08 9:09, Daniel P. Berrange wrote:

On Fri, Apr 28, 2017 at 09:38:38AM +0100, sfinu...@redhat.com wrote:

On Fri, 2017-04-28 at 13:23 +0900, TETSURO NAKAMURA wrote:

Hi Nova team,

I'm writing this e-mail because I'd like to have a discussion about
DPDK support at OpenStack Summit in Boston.

We have developed a dpdk-based patch panel named SPP[1], and we'd
like to start working on Openstack (ML2 driver) to develop
"networking-spp".

Especially, we'd like to use DPDK-ivshmem that was used to be used
to create "dpdkr" interface in ovs-dpdk[2].


To the best of my knowledge, IVSHMEM ports are no longer supported in
upstream. The documentation for this feature was recently removed from
OVS [1] stating:

  - The ivshmem library has been removed in DPDK since DPDK 16.11.
  - The instructions/scheme provided will not work with current
supported and future DPDK versions.
  - The linked patch needed to enable support in QEMU has never
been upstreamed and does not apply to the last 4 QEMU releases.
  - Userspace vhost has become the defacto OVS-DPDK path to the guest.

Note: I worked on DPDK vSwitch [2] way back when, and there were severe
security implications with sharing a chunk of host memory between
multiple guests (which is how IVSHMEM works). I'm not at all surprised
the feature was killed.


Security is only one of the issues. Upstream QEMU maintainers consider
the ivshmem device to have a seriously flawed design and discourage anyone
from using it. For anything network related, QEMU maintainers strongly
recommend using vhost-user.

IIUC, there is some experimental work to create a virtio based replacement
for ivshmem, for non-network related vm-2-vm communications, but that is
not going to be something usable for a while yet. This however just
reinforces the point that ivshmem is considered obsolete / flawed
technology by QEMU maintainers.
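
For readers following along, the vhost-user path recommended above wires up
roughly as follows; a hedged sketch only, with bridge and port names assumed
rather than taken from this thread:

    # Create an OVS bridge on the userspace (DPDK) datapath ...
    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
    # ... and add a vhost-user port; OVS then creates a unix socket
    # (typically under /var/run/openvswitch/) that QEMU's virtio-net
    # device connects to, instead of sharing memory ivshmem-style.
    ovs-vsctl add-port br0 vhost-user0 \
        -- set Interface vhost-user0 type=dpdkvhostuser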

Regards,
Daniel



--
Tetsuro Nakamura 
NTT Network Service Systems Laboratories
TEL:0422 59 6914(National)/+81 422 59 6914(International)
3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan





Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-08 Thread Octave J. Orgeron

Hi Jeremy,

I'm sure everyone would love to see Solaris open sourced again. I know I 
do! But unfortunately, it's not something within my power or control.


However, there is the reality that OpenStack wouldn't be as successful 
without commercial companies contributing to it. A good example is all 
of the excellent work done in Cinder and Neutron, where we have hundreds 
of drivers for networking gear, SDNs, NAS/SAN storage, etc. In those 
cases, none of those products that those drivers are written for are 
open sourced and they meet less resistance to committing code upstream. 
So I have to call BS on your comment that the community can't work with 
us because Solaris isn't open sourced.


Now for Oracle, we definitely need more 3rd party CI to make it easier 
to test our drivers, components, and patches against so that it's easier 
for the community to validate things. However, it takes time, resources, 
and money to make that happen. Hopefully that will get sorted out over 
time. But even if we make all of the investments in setting that up, we 
still need the upstream teams to come to the table and not shun us away 
just because we are Oracle :)


Octave



On 5/6/2017 6:26 AM, Jeremy Stanley wrote:

On 2017-05-05 15:35:16 -0600 (-0600), Octave J. Orgeron wrote:
[...]

If it's in support of Oracle specific technologies such as Solaris,

[...]

we are often shunned away because it's not Linux or "mainstream"
enough. A great example is how our Nova drivers for Solaris Zones,
Kernel Zones, and LDoms are turned away. So we have to spend extra
cycles maintaining our patches because they are shunned away from
getting into the gate.

[...]

Hopefully I'm not hitting a sore spot here, but bring back
OpenSolaris and the answer becomes simpler. The Microsoft and VMWare
devs have similar challenges in this regard because if you attempt
to combine your proprietary software with our free software, then
it's not something we're going to be able to test upstream (and the
burden to prove that it's working and soundly maintained is
substantially greater than if what you're integrating is also free
software).






[openstack-dev] [ptls] Come Pick up Mascot Stickers

2017-05-08 Thread Kendall Nelson
Hello PTLs!

The first pickup time will be from 2 to 4 pm today in the Foundation Lounge
Hynes 2nd Floor outside 206 from Ildiko Vancsa.

From there you are free to distribute them as you like :)

-Kendall Nelson


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-08 Thread Octave J. Orgeron

Hi Kevin,

I agree that OpenStack may need to re-architect and rethink certain 
design choices. That eventually has to happen to any open source or 
commercial product where it becomes too bloated and complex. k8s is a 
good example of something that has grown rapidly and is simple at this 
point. But it's inevitable that k8s will become more complicated as 
features such as networking (firewalls, load balancing, SDN, etc.) get 
thrown into the mix. To keep the bloat and complexity down, it takes 
good architecture and governance just like anything else in the IT world.


I do agree that scalability and high-availability are definitely issues 
for OpenStack when you dig deeper into the sub-components. There is a 
lot of re-inventing of the wheel when you look at how distributed 
services are implemented inside of OpenStack, and at their deficiencies. For some 
services you have a scheduler that can scale-out, but the conductor or 
worker process doesn't. A good example is cinder, where cinder-volume 
doesn't scale-out in a distributed manner and doesn't have a good 
mechanism for recovering when an instance fails. All across the services 
you see different methods for coordinating requests and tasks such as 
rabbitmq, redis, memcached, tooz, mysql, etc. So for an operator, you 
have to sift through those choices and configure the prerequisite 
infrastructure. This is a good example of a problem that should be 
solved with a single architecturally sound solution that all services 
can standardize on.


The problem in a lot of those cases comes down to development being 
detached from the actual use cases customers and operators are going to 
use in the real world. Having a distributed control plane with multiple 
instances of the api, scheduler, coordinator, and other processes is 
typically not testable without a larger hardware setup. When you get to 
large scale deployments, you need an active/active setup for the control 
plane. It's definitely not something you could develop for or test 
against on a single laptop with devstack, especially if you want to use 
more than a handful of the OpenStack services.


An OpenStack v2.0 may be the right way to address those issues and do 
the architecture work to get OpenStack to scale, reduce complexity, and 
make it easier for things like upgrades.


Octave

On 5/5/2017 5:44 PM, Fox, Kevin M wrote:

Note, when I say OpenStack below, I'm talking about 
nova/glance/cinder/neutron/horizon/heat/octavia/designate. No offence to the 
other projects intended. just trying to constrain the conversation a bit... 
Those parts are fairly comparable to what k8s provides.

I think part of your point is valid, that k8s isn't as feature rich in some 
ways, (networking for example), and will get more complex in time. But it has a 
huge amount of functionality for significantly less effort compared to an 
OpenStack deployment with similar functionality today.

I think there are some major things different between the two projects that are 
really paying off for k8s over OpenStack right now. We can use those as 
learning opportunities moving forward or the gap will continue to widen, as 
will the user migrations away from OpenStack. These are mostly architectural 
things.

Versions:
  * The real core of OpenStack is essentially version 1 + iterative changes.
  * k8s is essentially the third version of Borg. Plenty of room to ditch bad 
ideas/decisions.

That means OpenStack's architecture has essentially grown organically rather 
than being as carefully thought out. The backwards compatibility has been a 
good goal, but it's so hard to upgrade that most places burn it down and stand up 
something new anyway, so it's a lot of work with a lot less payoff than you would 
think. Maybe it is time to consider OpenStack version 2...

I think OpenStack's greatest strength is its standardized api's. Thus far we've 
been changing the api's over time and keeping the implementation mostly the 
same... maybe we should consider keeping the api the same and switch some of 
the implementations out... It might take a while to get back to where we are 
now, but I suspect the overall solution would be much better now that we have 
so much experience with building the first one.

k8s and OpenStack do largely the same thing. get in user request, schedule the 
resource onto some machines and allow management/lifecycle of the thing.

Why then does k8s's scalability goal target 5000 nodes while OpenStack really 
struggles with more than 300 nodes without a huge amount of extra work? I think 
it's architecture. OpenStack really abuses rabbit, does a lot with relational 
databases that maybe are better done elsewhere, and forces isolation between 
projects that maybe is not the best solution.

Part of it I think is combined services. They don't have separate services for 
cinder-api/nova-api, neutron-api/heat-api, etc. Just kube-apiserver. Same with 
the *-schedulers, just kube-scheduler. This means many fewer things to manage 
for 

Re: [openstack-dev] [Openstack-operators][heat]desire your feedback and join! And welcome on board!

2017-05-08 Thread Rico Lin

2017-05-04 12:08 GMT+08:00 Rico Lin :

> Hi all,
>
> Boston Summit is near, and we need your help and feedback! We really hope
> to improve your orchestration experiences, so if you're a User, Operator,
> or Developer, please join us at the `Large Orchestration Stacks` Forum
> session (Wednesday, May 10, 5:20pm-6:00pm)
> to discuss large-stack work and plans. We welcome any users/ops/devs
> to join and give your feedback or thoughts to help improve
> orchestration experiences.
> Here is the etherpad link, so please share your opinions whether you're
> coming to the summit or not:
> https://etherpad.openstack.org/p/BOS-forum-Large-Heat-stacks
>
> If you wish to learn more about heat, starting from the beginner level,
> welcome to join our `Heat - Project Onboarding`
> Forum session (Tuesday, May 9, 2:00pm-3:30pm).
> Feel free to contact me and share what you would like to learn from this
> session, which I will try my best to put something in.
>
> And if you're interested in an overall project update,
> welcome to join our `Project Update - Heat`
> session (Monday, May 8, 5:30pm-6:10pm).
>
> Also, there are a lot of Heat-related talks,
> so feel free to walk around and check them out.
>
> May The Force of OpenStack Be With You,
> Rico Lin (irc: ricolin)
>
>
>


-- 
May The Force of OpenStack Be With You,

Rico Lin (irc: ricolin)


Re: [openstack-dev] [MassivelyDistributed] Fog / Edge / Massively Distributed Cloud Sessions during the summit

2017-05-08 Thread Edgar Magana
Hello,

On behalf of the User Community, I would like to understand whether this is 
officially being considered as a request to create the Working Group. I could 
have missed another email requesting the inclusion, but if not, I would like to 
discuss the goals and objectives of the WG. The User Committee will be glad to 
help you out if anything is needed.

I am cc'ing the rest of the UC members.

Thanks,

Edgar

On 5/5/17, 1:16 PM, "lebre.adr...@free.fr"  wrote:

Dear all, 

A brief email to inform you about our schedule next week in Boston. 

In addition to interesting presentations that will deal with 
Fog/Edge/Massively Distributed Clouds challenges [1], I would like to highlight 
two important sessions: 

* A new Birds of a Feather session ``OpenStack on the Edge'' is now 
scheduled on Tuesday afternoon [2]. 
This will be the primary call to action covered by Jonathan Bryce during 
Monday's keynote about Edge Computing.
After introducing the goal of the WG, I will give the floor to participants 
to share their use-case (3/4 min for each presentation)
The Foundation has personally invited four large users that are planning 
for fog/edge computing.
This will guide the WG for the future and hopefully get more contributors 
involved.
Moreover, many of the Foundation staff already planned to attend and talk 
about the in-planning-phase OpenDev event and get input. 
The etherpad for this session is available at [3].

* Our regular face-to-face meeting for current and new members to discuss 
next cycle plans is still scheduled on Wednesday afternoon [4].
The etherpad for this session is available at [5].

I encourage all of you to attend both sessions.
See you in Boston and have a safe trip
ad_rien_

[1] https://www.openstack.org/summit/boston-2017/summit-schedule/global-search?t=edge
[2] https://www.openstack.org/summit/boston-2017/summit-schedule/events/18988/openstack-on-the-edge-fogedgemassively-distributed-clouds-birds-of-a-feather
[3] https://etherpad.openstack.org/p/BOS-UC-brainstorming-MassivelyDistributed-Fog-Edge
[4] https://www.openstack.org/summit/boston-2017/summit-schedule/events/18671/fogedgemassively-distributed-clouds-working-group
[5] https://etherpad.openstack.org/p/Massively_distributed_wg_boston_summit
https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds





[openstack-dev] [QA] PTG in Denver

2017-05-08 Thread Andrea Frittoli
Hello team,

I'm trying to get an idea about how many of us (QA) will be (or intend to
be) at the PTG in Denver.
Could you reply to me directly or ping me in IRC and let me know?

Thanks

andrea


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-08 Thread Bogdan Dobrelya
On 08.05.2017 16:06, Doug Hellmann wrote:
>>>
>>> option #3: Do not support or nurse gates for stable branches upstream.
>>> Instead, only create and close them and attach 3rd party gating, if
>>> asked by contributors willing to support LTS and nurse their gates.
>>> Note, closing a branch should be an exceptional case, if only no one
>>> willing to support and gate it for a long.
>>
>> As i mentioned before, folks can join the Stable Team and make things
>> like this happen. Won't happen by an email to the mailing list.

Good point.
Based on the results of this discussion (as I see it), there are
*several* action items to do first:
* Propose changes to the stable branch maintenance policy (which option
of #1/#2/#3 to pick?)
* Hold a TC vote, I suppose, to accept the changes officially
* Given the accepted change is #3, start implementation steps, like:
* Stop all stable/* gating jobs and merge freeze them
* Join the Stable Team and make things happen, but first:
* Get hardware for 3rd party CI gating from enterprises, associated with
operators, that are willing to contribute. Yes, option #3 assumes that as a
prerequisite for stable branches to "unfreeze" and step onto their "LTS
adoption" path.
* Setting up 3rd party CI, joining and learning from openstack infra
team, for anyone who wants to help and who is going to consume and
submit stable/* patches upstream instead of downstream forks, from now on.
* Unfreeze stable branches
* End up nominating more Stable Team core members with +2/+1 from the
*operators* world and allowing changes to get in fast.

So yes, it is important to encourage people to join, but someone who is
willing to contribute may not be enough to also bring the required
hardware and operators from enterprises willing to contribute.
*First of all, those folks need to be interested in no longer patching
things downstream*.

And if started upside down, like jumping in and contributing things
w/o the other required changes, this would bring anything but the
results we'd really like to see:
* More folks pushing hard to make backports and gate them upstream
against 'vanilla' stable branches, which *none* of the operators (read:
enterprises) consume as is - everything is done downstream, rebased on
top, and needs to be retested again.
* Folks abandoning the Stable Team in a few months as they see the value
of work done for 'vanilla' backporting and gating is near zero.

>>
>> Thanks,
>> Dims
> 
> Right. We need to change the tone of this thread from "you should do X"
> to "I want to do X, where should I start?"
> 
> Doug
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-08 Thread Doug Hellmann
Excerpts from Davanum Srinivas (dims)'s message of 2017-05-08 06:12:51 -0400:
> On Mon, May 8, 2017 at 3:52 AM, Bogdan Dobrelya  wrote:
> > On 06.05.2017 23:06, Doug Hellmann wrote:
> >> Excerpts from Thierry Carrez's message of 2017-05-04 16:14:07 +0200:
> >>> Chris Dent wrote:
>  On Wed, 3 May 2017, Drew Fisher wrote:
> > "Most large customers move slowly and thus are running older versions,
> > which are EOL upstream sometimes before they even deploy them."
> 
>  Can someone with more of the history give more detail on where the
>  expectation arose that upstream ought to be responsible things like
>  long term support? I had always understood that such features were
>  part of the way in which the corporately avaialable products added
>  value?
> >>>
> >>> We started with no stable branches, we were just producing releases and
> >>> ensuring that updates vaguely worked from N-1 to N. There were a lot of
> >>> distributions, and they all maintained their own stable branches,
> >>> handling backport of critical fixes. That is a pretty classic upstream /
> >>> downstream model.
> >>>
> >>> Some of us (including me) spotted the obvious duplication of effort
> >>> there, and encouraged distributions to share that stable branch
> >>> maintenance work rather than duplicate it. Here the stable branches were
> >>> born, mostly through a collaboration between Red Hat developers and
> >>> Canonical developers. All was well. Nobody was saying LTS back then
> >>> because OpenStack was barely usable so nobody wanted to stay on any
> >>> given version for too long.
> >>>
> >>> Maintaining stable branches has a cost. Keeping the infrastructure that
> >>> ensures that stable branches are actually working is a complex endeavor
> >>> that requires people to constantly pay attention. As time passed, we saw
> >>> the involvement of distro packagers become more limited. We therefore
> >>> limited the number of stable branches (and the length of time we
> >>> maintained them) to match the staffing of that team. Fast-forward to
> >>> today: the stable team is mostly one person, who is now out of his job
> >>> and seeking employment.
> >>>
> >>> In parallel, OpenStack became more stable, so the demand for longer-term
> >>> maintenance is stronger. People still expect "upstream" to provide it,
> >>> not realizing upstream is made of people employed by various
> >>> organizations, and that apparently their interest in funding work in
> >>> that area is pretty dead.
> >>>
> >>> I agree that our current stable branch model is inappropriate:
> >>> maintaining stable branches for one year only is a bit useless. But I
> >>> only see two outcomes:
> >>>
> >>> 1/ The OpenStack community still thinks there is a lot of value in doing
> >>> this work upstream, in which case organizations should invest resources
> >>> in making that happen (starting with giving the Stable branch
> >>> maintenance PTL a job), and then, yes, we should definitely consider
> >>> things like LTS or longer periods of support for stable branches, to
> >>> match the evolving usage of OpenStack.
> >>>
> >>> 2/ The OpenStack community thinks this is better handled downstream, and
> >>> we should just get rid of them completely. This is a valid approach, and
> >>> a lot of other open source communities just do that.
> >>
> >> Dropping stable branches completely would mean no upstream bugfix
> >> or security releases at all. I don't think we want that.
> >>
> >
> > I'd like to bring this up once again:
> >
> > option #3: Do not support or nurse gates for stable branches upstream.
> > Instead, only create and close them and attach 3rd party gating, if
> > asked by contributors willing to support LTS and nurse their gates.
> > Note, closing a branch should be an exceptional case, if only no one
> > willing to support and gate it for a long.
> 
> As i mentioned before, folks can join the Stable Team and make things
> like this happen. Won't happen by an email to the mailing list.
> 
> Thanks,
> Dims

Right. We need to change the tone of this thread from "you should do X"
to "I want to do X, where should I start?"

Doug



Re: [openstack-dev] [nova] Discussions for DPDK support in OpenStack

2017-05-08 Thread Daniel P. Berrange
On Fri, Apr 28, 2017 at 09:38:38AM +0100, sfinu...@redhat.com wrote:
> On Fri, 2017-04-28 at 13:23 +0900, TETSURO NAKAMURA wrote:
> > Hi Nova team,
> > 
> > I'm writing this e-mail because I'd like to have a discussion about
> > DPDK support at OpenStack Summit in Boston.
> > 
> > We have developed a dpdk-based patch panel named SPP[1], and we'd
> > like to start working on Openstack (ML2 driver) to develop
> > "networking-spp".
> > 
> > Especially, we'd like to use DPDK-ivshmem that was used to be used
> > to create "dpdkr" interface in ovs-dpdk[2].
> 
> To the best of my knowledge, IVSHMEM ports are no longer supported in
> upstream. The documentation for this feature was recently removed from
> OVS [1] stating:
> 
>   - The ivshmem library has been removed in DPDK since DPDK 16.11.
>   - The instructions/scheme provided will not work with current
>     supported and future DPDK versions.
>   - The linked patch needed to enable support in QEMU has never
>     been upstreamed and does not apply to the last 4 QEMU releases.
>   - Userspace vhost has become the defacto OVS-DPDK path to the guest.
> 
> Note: I worked on DPDK vSwitch [2] way back when, and there were severe
> security implications with sharing a chunk of host memory between
> multiple guests (which is how IVSHMEM works). I'm not at all surprised
> the feature was killed.

Security is only one of the issues. Upstream QEMU maintainers consider
the ivshmem device to have a seriously flawed design and discourage anyone
from using it. For anything network related, QEMU maintainers strongly
recommend using vhost-user.

IIUC, there is some experimental work to create a virtio based replacement
for ivshmem, for non-network related vm-2-vm communications, but that is
not going to be something usable for a while yet. This however just
reinforces the point that ivshmem is considered obsolete / flawed
technology by QEMU maintainers.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [openstack-dev] [neutron] - are you attending the Boston summit?

2017-05-08 Thread Kevin Benton
Let's plan for a social Wednesday night. I'll update this with a location
once we find a place.

On May 8, 2017 08:50, "MCCASLAND, TREVOR"  wrote:

> Looking forward to it! RSVP? +1
>
>
>
> *From:* Sukhdev Kapur [mailto:sukhdevka...@gmail.com]
> *Sent:* Saturday, May 06, 2017 12:31 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [neutron] - are you attending the Boston
> summit?
>
>
>
> Hey Neutron Folks,
>
>
>
> Following our past tradition, we should have Neutron dinner while we are
> all in Boston.
>
> Miguel has few places in mind. I would propose that we nominate him as the
> dinner organizer lieutenant.
>
>
>
> Miguel, I hope you will take us to some cool place.
>
>
>
> Thanks
>
> -Sukhdev
>
>
>
>
>
> On Thu, Apr 20, 2017 at 4:31 PM, Kevin Benton  wrote:
>
> Hi,
>
>
>
> If you are a Neutron developer attending the Boston summit, please add
> your name to the etherpad here so we can plan a Neutron social and easily
> coordinate in-person meetings:
> https://etherpad.openstack.org/p/neutron-boston-summit-attendees
> 
>
>
>
> Cheers,
>
> Kevin Benton
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - are you attending the Boston summit?

2017-05-08 Thread MCCASLAND, TREVOR
Looking forward to it! RSVP? +1

From: Sukhdev Kapur [mailto:sukhdevka...@gmail.com]
Sent: Saturday, May 06, 2017 12:31 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [neutron] - are you attending the Boston summit?

Hey Neutron Folks,

Following our past tradition, we should have Neutron dinner while we are all in 
Boston.
Miguel has a few places in mind. I would propose that we nominate him as the 
dinner organizer lieutenant.

Miguel, I hope you will take us to some cool place.

Thanks
-Sukhdev


On Thu, Apr 20, 2017 at 4:31 PM, Kevin Benton 
> wrote:
Hi,

If you are a Neutron developer attending the Boston summit, please add your 
name to the etherpad here so we can plan a Neutron social and easily coordinate 
in person meetings: 
https://etherpad.openstack.org/p/neutron-boston-summit-attendees

Cheers,
Kevin Benton

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Discussions for DPDK support in OpenStack

2017-05-08 Thread TETSURO NAKAMURA

Thank you for the reply!

On 2017/04/28 4:38, sfinu...@redhat.com wrote:

On Fri, 2017-04-28 at 13:23 +0900, TETSURO NAKAMURA wrote:

Hi Nova team,

I'm writing this e-mail because I'd like to have a discussion about
DPDK support at OpenStack Summit in Boston.

We have developed a dpdk-based patch panel named SPP[1], and we'd
like to start working on Openstack (ML2 driver) to develop
"networking-spp".

Especially, we'd like to use DPDK-ivshmem, which used to be used
to create the "dpdkr" interface in ovs-dpdk[2].


To the best of my knowledge, IVSHMEM ports are no longer supported in
upstream. The documentation for this feature was recently removed from
OVS [1] stating:

  - The ivshmem library has been removed in DPDK since DPDK 16.11.
  - The instructions/scheme provided will not work with current
supported and future DPDK versions.
  - The linked patch needed to enable support in QEMU has never
been upstreamed and does not apply to the last 4 QEMU releases.
  - Userspace vhost has become the defacto OVS-DPDK path to the guest.

Note: I worked on DPDK vSwitch [2] way back when, and there were severe
security implications with sharing a chunk of host memory between
multiple guests (which is how IVSHMEM works). I'm not at all surprised
the feature was killed.


We have issued a blueprint[3] for that use case.


Per above, I don't think this is necessary. vhost-user ports already
work as expected in nova.



Yes, IVSHMEM is a critical issue for multitenancy.
Still, we'd like to point out that there are private cloud use cases, 
such as carrier NFV, in which sharing host memory is not a critical 
issue. In those use cases we'd like to use ivshmem for its good 
performance.



As we are attending Boston Summit, could you have a discussion with
us at the Summit?


I'll be around the summit (IRC: sfinucan) if you want to chat more.
However, I'd suggest reaching out to Sean Mooney or Igor Duarte Cardoso
(both CCd) if you want further information about general support of
OVS-DPDK in OpenStack and DPDK acceleration in SFC, respectively. I'd
also suggest looking at networking-ovs-dpdk [3] which contains a lot of
helper tools for using OVS-DPDK in OpenStack, along with links to a
Brighttalk video I recently gave regarding the state of OVS-DPDK in
OpenStack.


Thank you very much for the information!
I'm already in touch with Sean Mooney on another matter,
so I will try to reach out to Igor Duarte Cardoso-san.



Hope this helps,
Stephen

[1] https://github.com/openvswitch/ovs/commit/90ca71dd317fea1ccf0040389dae895aa7b2b561
[2] https://github.com/01org/dpdk-ovs
[3] https://github.com/openstack/networking-ovs-dpdk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Tetsuro Nakamura 
NTT Network Service Systems Laboratories
TEL:0422 59 6914(National)/+81 422 59 6914(International)
3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Validations before upgrades and updates

2017-05-08 Thread Marios Andreou
Hi folks, after some local discussion with colleagues about improving the
upgrades experience, one of the items that came up was pre-upgrade and
pre-update validations. I took an AI to look at the current status of
tripleo-validations [0] and posted a simple WIP [1] intended to be run
before an undercloud update/upgrade, which just checks service status (a
rough sketch of that kind of check is included below). shardy pointed out
that for such checks it is better to continue using the per-service
manifests where possible, as in [2] for example, where we check service
status before the N..O major upgrade. There may still be some
undercloud-specific validations that we can land in the tripleo-validations
repo (thinking about things like the neutron networks/ports, validating the
current nova nodes' state, etc.).
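
For the curious, the underlying idea of the WIP check is roughly the
following - a minimal sketch in plain Python, assuming systemd-managed
undercloud services. tripleo-validations would normally express this as
an Ansible playbook, and the service list here is an illustrative
subset, not the real one:

import subprocess
import sys

UNDERCLOUD_SERVICES = [
    'openstack-heat-engine',
    'openstack-ironic-conductor',
    'openstack-nova-compute',
    'httpd',
]


def service_is_active(name):
    # `systemctl is-active --quiet` exits 0 only when the unit is active
    return subprocess.call(
        ['systemctl', 'is-active', '--quiet', name]) == 0


def main():
    failed = [s for s in UNDERCLOUD_SERVICES if not service_is_active(s)]
    if failed:
        print('Inactive services, not safe to update/upgrade: %s'
              % ', '.join(failed))
        sys.exit(1)
    print('All checked services active.')


if __name__ == '__main__':
    main()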

So, do folks have any thoughts on this subject - for example, the kinds of
things we should be checking? Steve said he had some reviews in progress
for collecting the overcloud ansible puppet/docker config into an ansible
playbook that the operator can invoke for the upgrade of the 'manual' nodes
(for example compute in the N..O workflow). The point is that we can add
more per-service ansible validation tasks into the service manifests for
execution when the play is run by the operator - but I'll let Steve point
at and talk about those.

cheers, marios

[0] https://github.com/openstack/tripleo-validations
[1] https://review.openstack.org/#/c/462918/
[2] https://github.com/openstack/tripleo-heat-templates/blob/stable/ocata/puppet/services/neutron-api.yaml#L197
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-ansible] Proposing Bertrand Lallau for kolla and kolla-ansible core

2017-05-08 Thread Vikram Hosakote (vhosakot)
+1  Great job Bertrand!

Regards,
Vikram Hosakote
IRC:  vhosakot

From: Michał Jastrzębski
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, May 02, 2017 at 10:13 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [kolla][kolla-ansible] Proposing Bertrand Lallau for 
kolla and kolla-ansible core

Hello,

It's my pleasure to start another core reviewer vote. Today it's
Bertrand (blallau). Consider this mail my +1 vote. Members of
kolla-ansible and kolla core team, please cast your votes :) Voting
will be open for 2 weeks (until 16th of May).

I also wanted to say that Bertrand went through our core mentorship
program (if only for a few weeks, because he did an awesome job before
too) :)

Thank you,
Michal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] notification update week 19

2017-05-08 Thread Balazs Gibizer

Hi,

Here is the status update / focus setting mail about notification work
for week 19.

Bugs
----

[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance 
notifications are sent with inconsistent timestamp format. The solution 
still needs time and effort from the subteam 
https://review.openstack.org/#/c/421981


[Medium] https://bugs.launchpad.net/nova/+bug/1687012
flavor-delete notification should not try to lazy-load projects
The patch https://review.openstack.org/#/c/461032 needs core review.


Versioned notification transformation
-------------------------------------
The volume_attach and detach patches merged last week. Thanks to 
everybody who made that happen. Currently the following three 
transformation patches are in good shape, so let's focus on them in the 
coming weeks:
* https://review.openstack.org/#/c/396225/ Transform 
instance.trigger_crash_dump notification
* https://review.openstack.org/#/c/396210/ Transform aggregate.add_host 
notification
* https://review.openstack.org/#/c/396211/ Transform 
aggregate.remove_host notification



Searchlight integration
-----------------------
bp additional-notification-fields-for-searchlight
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The keypairs patch has been split to add the whole keypair objects only 
to the instance.create notification and add only the key_name to every 
instance.* notification:
* https://review.openstack.org/#/c/463001 Add separate instance.create 
payload type
* https://review.openstack.org/#/c/419730 Add keypairs field to 
InstanceCreatePayload
* https://review.openstack.org/#/c/463002 Add key_name field to 
InstancePayload
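
For readers who have not opened the reviews, the shape of the split is
roughly the following - a sketch in nova's versioned notification style,
with field lists heavily abbreviated and version numbers illustrative;
see the reviews above for the real definitions:

from nova.notifications.objects import base as notification_base
from nova.objects import base as nova_base
from nova.objects import fields


@nova_base.NovaObjectRegistry.register_notification
class InstancePayload(notification_base.NotificationPayloadBase):
    VERSION = '1.1'
    fields = {
        'uuid': fields.UUIDField(),
        # ... the rest of the common instance fields ...
        # key_name is cheap, so it is carried by every instance.*
        # notification:
        'key_name': fields.StringField(nullable=True),
    }


@nova_base.NovaObjectRegistry.register_notification
class InstanceCreatePayload(InstancePayload):
    # The full keypair objects are only carried by instance.create,
    # hence the separate payload type; ovo merges the parent's fields
    # into this class automatically.
    VERSION = '1.0'
    fields = {
        'keypairs': fields.ListOfObjectsField('KeypairPayload'),
    }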


Adding BDM to the instance.* notifications is also in the pipeline:
* https://review.openstack.org/#/c/448779/

There is also a separate patch to add tags to instance.create:
https://review.openstack.org/#/c/459493/ Add tags to instance.create 
Notification



Weekly meeting
--------------
The notification subteam holds its weekly meeting on Tuesdays at 17:00 UTC 
on openstack-meeting-4. Due to the Boston forum this week's meeting is 
cancelled and the next meeting will be held on the 16th of May.

https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170516T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-08 Thread Davanum Srinivas
On Mon, May 8, 2017 at 3:52 AM, Bogdan Dobrelya  wrote:
> On 06.05.2017 23:06, Doug Hellmann wrote:
>> Excerpts from Thierry Carrez's message of 2017-05-04 16:14:07 +0200:
>>> Chris Dent wrote:
 On Wed, 3 May 2017, Drew Fisher wrote:
> "Most large customers move slowly and thus are running older versions,
> which are EOL upstream sometimes before they even deploy them."

 Can someone with more of the history give more detail on where the
 expectation arose that upstream ought to be responsible for things like
 long term support? I had always understood that such features were
 part of the way in which the corporately available products added
 value?
>>>
>>> We started with no stable branches, we were just producing releases and
>>> ensuring that updates vaguely worked from N-1 to N. There were a lot of
>>> distributions, and they all maintained their own stable branches,
>>> handling backport of critical fixes. That is a pretty classic upstream /
>>> downstream model.
>>>
>>> Some of us (including me) spotted the obvious duplication of effort
>>> there, and encouraged distributions to share that stable branch
>>> maintenance work rather than duplicate it. Here the stable branches were
>>> born, mostly through a collaboration between Red Hat developers and
>>> Canonical developers. All was well. Nobody was saying LTS back then
>>> because OpenStack was barely usable so nobody wanted to stay on any
>>> given version for too long.
>>>
>>> Maintaining stable branches has a cost. Keeping the infrastructure that
>>> ensures that stable branches are actually working is a complex endeavor
>>> that requires people to constantly pay attention. As time passed, we saw
>>> the involvement of distro packagers become more limited. We therefore
>>> limited the number of stable branches (and the length of time we
>>> maintained them) to match the staffing of that team. Fast-forward to
>>> today: the stable team is mostly one person, who is now out of his job
>>> and seeking employment.
>>>
>>> In parallel, OpenStack became more stable, so the demand for longer-term
>>> maintenance is stronger. People still expect "upstream" to provide it,
>>> not realizing upstream is made of people employed by various
>>> organizations, and that apparently their interest in funding work in
>>> that area is pretty dead.
>>>
>>> I agree that our current stable branch model is inappropriate:
>>> maintaining stable branches for one year only is a bit useless. But I
>>> only see two outcomes:
>>>
>>> 1/ The OpenStack community still thinks there is a lot of value in doing
>>> this work upstream, in which case organizations should invest resources
>>> in making that happen (starting with giving the Stable branch
>>> maintenance PTL a job), and then, yes, we should definitely consider
>>> things like LTS or longer periods of support for stable branches, to
>>> match the evolving usage of OpenStack.
>>>
>>> 2/ The OpenStack community thinks this is better handled downstream, and
>>> we should just get rid of them completely. This is a valid approach, and
>>> a lot of other open source communities just do that.
>>
>> Dropping stable branches completely would mean no upstream bugfix
>> or security releases at all. I don't think we want that.
>>
>
> I'd like to bring this up once again:
>
> option #3: Do not support or nurse gates for stable branches upstream.
> Instead, only create and close them and attach 3rd party gating, if
> asked by contributors willing to support LTS and nurse their gates.
> Note, closing a branch should be an exceptional case, for when no one
> is willing to support and gate it any longer.

As I mentioned before, folks can join the Stable Team and make things
like this happen. It won't happen via an email to the mailing list.

Thanks,
Dims

>> Doug
>>
>>>
>>> The current reality in terms of invested resources points to (2). I
>>> personally would prefer (1), because that lets us address security
>>> issues more efficiently and avoids duplicating effort downstream. But
>>> unfortunately I don't control where development resources are posted.
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Unsubscribe: 

[openstack-dev] [openstack-ansible] Bug triage cancelled this week

2017-05-08 Thread Jean-Philippe Evrard
Hello everyone,

We won't have an openstack-ansible bug triage this week, as many of our 
contributors are at the summit!

See you next week!
Best regards,

Jean-Philippe Evrard
(@)evrardjp


Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ui] another i18n proposal for heat templates 'description' help strings

2017-05-08 Thread Peng Wu
Hi Julie,

  I generated an example JavaScript file containing the translatable
strings.
  URL: https://pwu.fedorapeople.org/openstack-i18n/tripleo/tripleo-heat-templates.js

  And the code to generate the above file is in:
  https://pwu.fedorapeople.org/openstack-i18n/tripleo/
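
  For readers who have not opened the link: the generator is essentially
a walk over the heat templates that collects the description strings and
emits them in a react-intl friendly form. A minimal sketch of that idea
follows - key naming and the output layout are illustrative here, not
necessarily what the script above does:

import json
import os

import yaml


def extract_messages(tht_root):
    """Collect 'description' strings from every template under tht_root."""
    messages = {}
    for dirpath, _dirs, files in os.walk(tht_root):
        for name in files:
            if not name.endswith(('.yaml', '.yml')):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                try:
                    template = yaml.safe_load(f)
                except yaml.YAMLError:
                    continue
            if not isinstance(template, dict):
                continue
            rel = os.path.relpath(path, tht_root)
            if 'description' in template:
                messages[rel + '::description'] = template['description']
            for pname, pdef in (template.get('parameters') or {}).items():
                if isinstance(pdef, dict) and 'description' in pdef:
                    messages['%s::%s::description' % (rel, pname)] = \
                        pdef['description']
    return messages


def write_js(messages, out='tripleo-heat-templates.js'):
    # one react-intl message descriptor per string; the key doubles as id
    entries = ',\n'.join(
        "  '%s': { id: '%s', defaultMessage: %s }" % (k, k, json.dumps(v))
        for k, v in sorted(messages.items()))
    with open(out, 'w') as f:
        f.write("import { defineMessages } from 'react-intl';\n\n")
        f.write('export default defineMessages({\n%s\n});\n' % entries)


if __name__ == '__main__':
    write_js(extract_messages('tripleo-heat-templates'))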

  The generated file needs to be copied into the tripleo-ui project and
translated like the other JavaScript files.

  Please review it, thanks!

Regards,
  Peng


On Mon, 2017-04-10 at 16:13 +0100, Julie Pichon wrote:
> Hi Peng,
> 
> I added some thoughts in-line, let me know what you think.
> 
> On 10 April 2017 at 08:10, Peng Wu  wrote:
> > Hi,
> > 
> > In the TripleO UI project, users requested translation of the web UI.
> > But some web UI strings are displayed from heat template files in the
> > tripleo-heat-templates project.
> > 
> >   In order to get translated templates displayed in tripleo-ui, we
> > propose another solution as follows, which needs to change code in
> > tripleo-heat-templates and tripleo-ui projects.
> > 
> >   I18n proposal for Heat templates 'description' help strings
> > 
> >   1. Update tripleo-heat-templates to generate the javascript files
> > to include all translation strings, like "tripleo-heat-templates.js"
> > 
> >  a. Need to write python script to extract "title" and
> > "description" field from yaml files and generate "tripleo-heat-
> > templates.js" for react-intl usage in tripleo-ui
> 
> I think extracting the strings directly into js/json format may not
> be a viable option, because it isn't a format supported by
> Zanata [1].
> 
> For tripleo-ui itself we use react-intl which expects json, and work
> with scripts to convert to/from pot and po (see [2]) which are fully
> supported by Zanata.
> 
> Or is the idea that we'd also generate pot/po as intermediary steps and
> only store json in the repo?
> 
> >  b. Use default message as message id or consider nodejs-i18n
> > for tripleo-ui
> 
> I'm wary of a library change, considering the amount of churn it would
> cause in the code base for all the existing strings, plus that would
> then make backports more difficult. It really needs to be considered
> carefully.
> 
> > 
> >   2. Update tripleo-ui to use "tripleo-heat-templates.js"
> > 
> >  a. Write some script to sync "tripleo-heat-templates.js" from
> > tripleo-heat-templates
> > 
> >  b. Call formatMessage function for "title" and "description" field
> > with message id (use default message) and default message or consider
> > nodejs-i18n for tripleo-ui
> > 
> >   Refer URL for message id: https://github.com/yahoo/react-intl/issues/912
> 
> Could you explain a bit more the issue with the ids? I see us defining
> an id in every message [3] and this is how they are referenced in the
> locale json [4] (the mapping is not done by message, but by ID).
> 
> When it comes to the THT message, I think they all have a hierarchy
> that perhaps could be used as a key to map between the original string
> and the translation? Something along the lines of
> OS::TripleO::Services::Apache::ApacheMaxRequestWorkers::description,
> whichever form the API gives us at the moment.
> 
> >   Please evaluate it, thanks!
> 
> Thank you!
> 
> Julie
> 
> [1] http://docs.zanata.org/en/release/user-guide/projects/project-types/#supported-types
> [2] https://github.com/openstack/tripleo-ui/blob/master/docs/translation.rst#extracting-messages-from-components
> [3] https://github.com/openstack/tripleo-ui/blob/master/src/js/components/nodes/Nodes.js#L17
> [4] https://github.com/openstack/tripleo-ui/blob/master/i18n/locales/es.json#L3

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovn] metadata agent implementation

2017-05-08 Thread Miguel Angel Ajo Pelayo
On Mon, May 8, 2017 at 2:48 AM, Michael Still  wrote:

> It would be interesting for this to be built in a way where other
> endpoints could be added to the list and have extra headers added to them.
>
> For example, we could end up with something quite similar to EC2 IAM if
> we could add headers on the way through for requests to OpenStack endpoints.
>
> Do you think the design your proposing will be extensible like that?
>


I believe we should focus on achieving parity with the neutron reference
implementation first; later on, what you're proposing would probably need
to be modelled on the neutron side.

Could you provide a practical example of how that would work anyway?


>
> Thanks,
> Michael
>
>
>
>
> On Fri, May 5, 2017 at 10:07 PM, Daniel Alvarez Sanchez <
> dalva...@redhat.com> wrote:
>
>> Hi folks,
>>
>> Now that it looks like the metadata proposal is more refined [0], I'd like
>> to get some feedback from you on the driver implementation.
>>
>> The ovn-metadata-agent in networking-ovn will be responsible for
>> creating the namespaces, spawning haproxies and so on. But also,
>> it must implement most of the "old" neutron-metadata-agent functionality
>> which listens on a UNIX socket and receives requests from haproxy,
>> adds some headers and forwards them to Nova. This means that we can
>> import/reuse big part of neutron code.
>>
Makes sense: that way you avoid depending on an extra co-hosted
service, reducing deployment complexity.


>> I wonder what you guys think about depending on the neutron tree for the
>> agent implementation, given that we can benefit from a lot of code reuse.
>> On the other hand, if we want to get rid of this dependency, we could
>> probably write the agent "from scratch" in C (what about having C
>> code in the networking-ovn repo?) and, at the same time, it should
>> buy us a performance boost (probably not very noticeable since it'll
>> respond to requests from local VMs involving a few lookups and
>> processing simple HTTP requests; talking to nova would take most
>> of the time and this only happens at boot time).
>>
>
I would try to keep that part in Python, like everything else in the
networking-ovn repo. I remember that Jakub made lots of improvements in the
neutron-metadata-agent area through caching; I'd make sure we reuse that if
it's of use to us (not sure if we used it for nova communication or not).

The neutron metadata agent apparently has a get_ports RPC call [2] to the
neutron-server plugin. We don't want RPC calls; we want ovsdb to get that
info. I have a vague recollection of caching also being used for those
requests [1], but with ovsdb we have that for free.

I don't know; the agent is ~300 LOC, so a full re-write in Python (copying
whatever is necessary) seems like a reasonable way to go, but I guess
actually going down that rabbit hole would tell you better than I can
whether I'm wrong or whether it makes sense.
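
To make that option concrete, here is a bare-bones sketch of the
haproxy-facing half of such an agent. The ovsdb lookup is stubbed out,
binding the HTTP server to the UNIX socket and all error handling are
omitted, and the names, URL and secret are illustrative - this shows
the shape of the thing, not actual code:

import hashlib
import hmac

import requests
from six.moves.BaseHTTPServer import BaseHTTPRequestHandler

NOVA_METADATA_URL = 'http://127.0.0.1:8775'
SHARED_SECRET = b'change-me'  # nova's metadata_proxy_shared_secret


def lookup_instance(headers):
    """Stub: map the request (client IP / network id headers added by
    haproxy) to (instance_id, tenant_id) - here it would be an OVSDB
    lookup instead of neutron's get_ports RPC."""
    raise NotImplementedError


def sign(instance_id):
    # same HMAC-SHA256 scheme the reference neutron agent uses, so
    # nova-api can verify the request really came from the proxy
    return hmac.new(SHARED_SECRET, instance_id.encode('utf-8'),
                    hashlib.sha256).hexdigest()


class MetadataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        instance_id, tenant_id = lookup_instance(self.headers)
        # add the identifying headers and proxy to nova's metadata API
        resp = requests.get(
            NOVA_METADATA_URL + self.path,
            headers={'X-Instance-ID': instance_id,
                     'X-Tenant-ID': tenant_id,
                     'X-Instance-ID-Signature': sign(instance_id)})
        self.send_response(resp.status_code)
        self.send_header('Content-Type',
                         resp.headers.get('Content-Type', 'text/plain'))
        self.end_headers()
        self.wfile.write(resp.content)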


>
>> I would probably aim for a Python implementation
>>
> +1000


> reusing/importing
>> code from neutron tree but I'm not sure how we want to deal with
>> changes in neutron codebase (we're actually importing code now).
>> Looking forward to reading your thoughts :)
>>
>
I guess the neutron-ns-metadata haproxy spawning [3] can be reused
from neutron; I wonder if it would make sense to move that to neutron_lib?
I believe that's the key thing that can be reused:

if we don't reuse it, we need to maintain it in two places;
if we reuse it, we can be broken by changes in the neutron repo,
but I'm sure we're flexible enough to react to such changes.

Cheers! :D


>
>> Thanks,
>> Daniel
>>
>> [0] https://review.openstack.org/#/c/452811/
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Rackspace Australia
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-08 Thread Bogdan Dobrelya
On 06.05.2017 23:06, Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2017-05-04 16:14:07 +0200:
>> Chris Dent wrote:
>>> On Wed, 3 May 2017, Drew Fisher wrote:
 "Most large customers move slowly and thus are running older versions,
 which are EOL upstream sometimes before they even deploy them."
>>>
>>> Can someone with more of the history give more detail on where the
>>> expectation arose that upstream ought to be responsible for things like
>>> long term support? I had always understood that such features were
>>> part of the way in which the corporately available products added
>>> value?
>>
>> We started with no stable branches, we were just producing releases and
>> ensuring that updates vaguely worked from N-1 to N. There were a lot of
>> distributions, and they all maintained their own stable branches,
>> handling backport of critical fixes. That is a pretty classic upstream /
>> downstream model.
>>
>> Some of us (including me) spotted the obvious duplication of effort
>> there, and encouraged distributions to share that stable branch
>> maintenance work rather than duplicate it. Here the stable branches were
>> born, mostly through a collaboration between Red Hat developers and
>> Canonical developers. All was well. Nobody was saying LTS back then
>> because OpenStack was barely usable so nobody wanted to stay on any
>> given version for too long.
>>
>> Maintaining stable branches has a cost. Keeping the infrastructure that
>> ensures that stable branches are actually working is a complex endeavor
>> that requires people to constantly pay attention. As time passed, we saw
>> the involvement of distro packagers become more limited. We therefore
>> limited the number of stable branches (and the length of time we
>> maintained them) to match the staffing of that team. Fast-forward to
>> today: the stable team is mostly one person, who is now out of his job
>> and seeking employment.
>>
>> In parallel, OpenStack became more stable, so the demand for longer-term
>> maintenance is stronger. People still expect "upstream" to provide it,
>> not realizing upstream is made of people employed by various
>> organizations, and that apparently their interest in funding work in
>> that area is pretty dead.
>>
>> I agree that our current stable branch model is inappropriate:
>> maintaining stable branches for one year only is a bit useless. But I
>> only see two outcomes:
>>
>> 1/ The OpenStack community still thinks there is a lot of value in doing
>> this work upstream, in which case organizations should invest resources
>> in making that happen (starting with giving the Stable branch
>> maintenance PTL a job), and then, yes, we should definitely consider
>> things like LTS or longer periods of support for stable branches, to
>> match the evolving usage of OpenStack.
>>
>> 2/ The OpenStack community thinks this is better handled downstream, and
>> we should just get rid of them completely. This is a valid approach, and
>> a lot of other open source communities just do that.
> 
> Dropping stable branches completely would mean no upstream bugfix
> or security releases at all. I don't think we want that.
> 

I'd like to bring this up once again:

option #3: Do not support or nurse gates for stable branches upstream.
Instead, only create and close them and attach 3rd party gating, if
asked by contributors willing to support LTS and nurse their gates.
Note, closing a branch should be an exceptional case, for when no one
is willing to support and gate it any longer.

> Doug
> 
>>
>> The current reality in terms of invested resources points to (2). I
>> personally would prefer (1), because that lets us address security
>> issues more efficiently and avoids duplicating effort downstream. But
>> unfortunately I don't control where development resources are posted.
>>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev