Re: [openstack-dev] [Zun] Propose addition of Zun core team and removal notice

2017-06-21 Thread Shuu Mutou
+1 to all from me.

Welcome Shunli! And great thanks to Dims and Yanyan!

Best regards,
Shu

> -----Original Message-----
> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> Sent: Wednesday, June 21, 2017 12:30 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [Zun] Propose addition of Zun core team and
> removal notice
> 
> +1 from me as well.
> 
> 
> 
> Thanks Dims and Yanyan for your contribution to Zun :)
> 
> 
> 
> Regards,
> 
> Madhuri
> 
> 
> 
> From: Kevin Zhao [mailto:kevin.z...@linaro.org]
> Sent: Wednesday, June 21, 2017 6:37 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [Zun] Propose addition of Zun core team and
> removal notice
> 
> 
> 
> +1 for me.
> 
> Thx!
> 
> 
> 
> On 20 June 2017 at 13:50, Pradeep Singh wrote:
> 
>   +1 from me,
> 
>   Thanks Shunli for your great work :)
> 
> 
> 
>   On Tue, Jun 20, 2017 at 10:02 AM, Hongbin Lu wrote:
> 
>   Hi all,
> 
> 
> 
>   I would like to propose the following change to the Zun
> core team:
> 
> 
> 
>   + Shunli Zhou (shunliz)
> 
> 
> 
>   Shunli has been contributing to Zun for a while and has done
> a lot of work. He has completed the BP for supporting resource claims and
> is close to finishing the filter scheduler BP. He has shown a good understanding
> of Zun’s code base and expertise in other OpenStack projects. The
> quantity [1] and quality of his submitted code also show his qualifications.
> Therefore, I think he will be a good addition to the core team.
> 
> 
> 
>   In addition, I have a removal notice. Davanum Srinivas
> (Dims) and Yanyan Hu requested to be removed from the core team. Dims has
> been helping us since the inception of the project. I treated him as a mentor,
> and his guidance has always been helpful for the whole team. As the project becomes
> mature and stable, I agree with him that it is time to relieve him of
> the core reviewer responsibility, because he has many other important
> responsibilities in the OpenStack community. Yanyan is leaving because
> he has been relocated and is now focused on an area outside OpenStack. I would like
> to take this chance to thank Dims and Yanyan for their contributions to Zun.
> 
> 
> 
>   Core reviewers, please cast your vote on this proposal.
> 
> 
> 
>   Best regards,
> 
>   Hongbin


Re: [openstack-dev] [vitrage] First Vitrage Pike release by the end of this week

2017-06-21 Thread Yujun Zhang (ZTE)
Is it done yet?

How do we fetch the released version for downstream development?

On Tue, Jun 6, 2017 at 2:17 PM Afek, Ifat (Nokia - IL/Kfar Sava) <
ifat.a...@nokia.com> wrote:

> Hi,
>
> Pike-2 milestone is at the end of this week, and although we are not
> working by the milestones model (we are working by a
> cycle-with-intermediary model) we need to have the first Vitrage Pike
> release by the end of this week.
>
> I would like to release vitrage, python-vitrageclient and
> vitrage-dashboard tomorrow. Any objections? Please let me know if you think
> something has to be changed/added before the release.
>
> Also, we need to add release notes for the newly added features. This list
> includes (let me know if I missed something):
>
> vitrage
> • Vitrage ID
> • Support ‘not’ operator in the evaluator templates
> • Performance improvements
> • Support entity equivalences
> • SNMP notifier
>
> python-vitrageclient
> • Multi tenancy support
> • Resources API
>
> vitrage-dashboard
> • Multi tenancy support – Vitrage in admin menu
> • Added ‘search’ option in the entity graph
>
> Please add a release notes file for each of your features (I’ll send an
> explanation in a separate mail), or send me a few lines of the feature’s
> description and I’ll add it.
>
> Thanks,
> Ifat.
>
-- 
Yujun Zhang


Re: [openstack-dev] [stable][requirements] Bootstrapping requirements-stable-core

2017-06-21 Thread Davanum Srinivas
+1 from me.

On Wed, Jun 21, 2017 at 6:48 PM, Tony Breeds wrote:
> Hi All,
> Recently it's been clear that we need a requirements-stable team.
> Until now that's been handled by the release managers and the
> stable-maint-core team.
>
> With the merge of [1] we have the groundwork for that team.  I'd like
> to nominate:
>
>  * dmllr -- Dirk Mueller
>  * prometheanfire -- Matthew Thode
>  * SeanM -- Sean McGinnis
>
> As that initial team.  Each of them has been doing regular reviews on
> stable branches and has shown an understanding of how the stable policy
> applies to the requirements repo.
>
> Yours Tony.
>
> [1] https://review.openstack.org/#/c/470419/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [neutron][ml2][drivers][openvswitch] Question

2017-06-21 Thread Kevin Benton
Rules to allow traffic aren't set up until the port is wired, at which point
it calls functions like this:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L602-L606
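For illustration, a rough sketch of that ordering (not the actual agent code,
though the method names follow the agent API quoted in the question below):

    def setup_physical_bridge(int_br, phys_br, int_ofport, phys_ofport):
        # at bridge setup: block all untranslated traffic between bridges
        int_br.drop_port(in_port=int_ofport)
        phys_br.drop_port(in_port=phys_ofport)

    def wire_network(phys_br, phys_ofport, lvid, segmentation_id):
        # when a port on the network is wired, a higher-priority flow
        # overrides the drop rule and rewrites the local VLAN tag
        phys_br.add_flow(priority=4,
                         in_port=phys_ofport,
                         dl_vlan=lvid,
                         actions="mod_vlan_vid:%s,normal" % segmentation_id)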

On Wed, Jun 21, 2017 at 4:49 PM, Margin Hu wrote:

> Hi Guys,
>
> I have a question in the setup_physical_bridges function of
> neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
>
>  # block all untranslated traffic between bridges
> self.int_br.drop_port(in_port=int_ofport)
> br.drop_port(in_port=phys_ofport)
>
> [refer](https://github.com/openstack/neutron/blob/master/neu
> tron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1159)
>
> When is traffic between the bridges permitted? And when is the flow table of
> the OVS bridge modified?


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Chris Hoge

> On Jun 21, 2017, at 2:35 PM, Jeremy Stanley wrote:
> 
> On 2017-06-21 13:52:11 -0500 (-0500), Lauren Sell wrote:
> [...]
>> To make this actionable...Github is just a mirror of our
>> repositories, but for better or worse it's the way most people in
>> the world explore software. If you look at OpenStack on Github
>> now, it’s impossible to tell which projects are official. Maybe we
>> could help by better curating the Github projects (pinning some of
>> the top projects, using the new new topics feature to put tags
>> like openstack-official or openstack-unofficial, coming up with
>> more standard descriptions or naming, etc.).
> 
> I hadn't noticed the pinned repositories option until you mentioned
> it: appears they just extended that feature to orgs back in October
> (and introduced the topics feature in January). I could see
> potentially integrating pinning and topic management into the
> current GH API script we run when creating new mirrors
> there--assuming these are accessible via their API anyway--and yes
> normalizing the descriptions to something less freeform is something
> else we'd discussed to be able to drive users back to the official
> locations for repositories (or perhaps to the project navigator).
> 
> I've already made recent attempts to clarify our use of GH in the
> org descriptions and linked the openstack org back to the project
> navigator too, since those were easy enough to do right off the bat.
> 
>> Same goes for our repos…if there’s a way we could differentiate
>> between official and unofficial projects on this page it would be
>> really useful: https://git.openstack.org/cgit/openstack/
> 
> I have an idea as to how to go about that by generating custom
> indices rather than relying on the default one cgit provides; I'll
> mull it over.
> 
>> 2) Create a simple structure within the official set of projects
>> to provide focus and a place to get started. The challenge (again
>> to our success, and lots of great work by the community) is that
>> even the official project set is too big for most people to
>> follow.
> 
> This is one of my biggest concerns as well where high-cost (in the
> sense of increasingly valuable Infra team member time) solutions are
> being tossed around to solve the "what's official?" dilemma, while
> not taking into account that the overwhelming majority of active Git
> repositories we're hosting _are_ already deliverables for official
> teams. I strongly doubt that just labelling the minority as
> unofficial will in any way lessen the overall confusion about the
> *more than one thousand* official Git repositories we're
> maintaining.

Another instance where the horse is out of the barn, but this
is one of the reasons why I don’t like it when config-management
style efforts are organized as a one-to-one mapping of repositories
to corresponding projects. It created massive sprawl
within the ecosystem, limited opportunities for code sharing,
and made refactoring a nightmare. I lost count of the number
of times we submitted n inconsistent patches to change
similar behavior across n+1 projects. Trying to build a library
helped but was never as powerful as being able to target a
single repository.

>> While I fully admit it was an imperfect system, the three tier
>> delineation of “integrated," “incubated" and “stackforge" was
>> something folks could follow pretty easily. The tagging and
>> mapping is valuable and provides additional detail, but having the
>> three clear buckets is ideal.  I would like to see us adopt a
>> similar system, even if the names change (i.e. core infrastructure
>> services, optional services, stackforge). Happy to throw out ideas
>> if there is interest.
> [...]
> 
> Nearly none (almost certainly only a single-digit percentage anyway)
> of the Git repositories we host are themselves source code for
> persistent network services. We have lots of tools, reusable
> libraries, documentation, meta-documentation, test harnesses,
> configuration management frameworks, plugins... we probably need a
> way to reroute audiences who are not strictly interested in browsing
> source code itself so they stop looking at those Git repositories or
> else confusion is imminent regardless. As a community we do nearly
> _everything_ in Git, far beyond mere application and service
> software.
> 
> The other logical disconnect I'm seeing is that our governance is
> formed around teams, not around software. Trying to explain the
> software through the lens of governance is almost certain to confuse
> newcomers. Because we use one term (OpenStack!) for both the
> community of contributors and the software they produce, it's going
> to become very tangled in people's minds. I'm starting to strongly
> wish we could use entirely different names for the community and the
> software, but that train has probably already sailed.

Two points: 
1) Block That Metaphor!
2) You’ve convinced me that the existing tooling around our current
state 

[openstack-dev] [stable][requirements] Bootstrapping requirements-stable-core

2017-06-21 Thread Tony Breeds
Hi All,
Recently it's been clear that we need a requirements-stable team.
Until now that's been handled by the release managers and the
stable-maint-core team.

With the merge of [1] we have the groundwork for that team.  I'd like
to nominate:

 * dmllr -- Dirk Mueller
 * prometheanfire -- Matthew Thode
 * SeanM -- Sean McGinnis

As that initial team.  Each of them has been doing regular reviews on
stable branches and has shown an understanding of how the stable policy
applies to the requirements repo.

Yours Tony.

[1] https://review.openstack.org/#/c/470419/




Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Zhipeng Huang
To put my Public Cloud WG hat on: I think the lack of coordination is a
common problem among OpenStack WGs, but I don't think transitioning WGs to
SIGs alone will solve the issue.

From my observations of existing WGs, the mechanisms that are now
in place work great; we have a lot of productive output coming out of WGs.

The core problem here is how WG chairs establish a working method with the PTLs
of related projects each cycle, so that both sides understand each other and
important matters get solved in the coming cycles. The dev circles and
non-dev circles are fairly isolated at the moment.

Merely rebranding WGs won't solve this core problem, so I would recommend the
following actions:

1. In addition to the current TC/UC/Projects/WG mechanism, allow people to
establish ad-hoc SIGs without any procedural overhead (getting approved by
any entity). Folks in spontaneously established SIGs could find their
best way to get their requirements done. We could have an overall wiki page
for collecting/registering all the SIGs created.

2. Enhance dev/non-dev comms. I doubt more meetings will be the solution.

a. I would suggest that projects, when doing their planning at the Forum or
PTG, always leave a spot for requirements from WGs. And WG chairs should
participate in these dev meetings if their WG has done related work.
b. Moreover, the foundation could start promoting project/WG
collaboration best practices, or even note in the release documents that
certain features are based upon feedback from certain WGs.

c. WGs should have cycle-based releases of work so that they get a sense of
timing, and don't get lost in a permanent discussion mode.


My 2 cents

On Thu, Jun 22, 2017 at 1:33 AM, Lance Bragstad wrote:

>
>
> On 06/21/2017 11:55 AM, Matt Riedemann wrote:
> > On 6/21/2017 11:17 AM, Shamail Tahir wrote:
> >>
> >>
> >> On Wed, Jun 21, 2017 at 12:02 PM, Thierry Carrez
> >> > wrote:
> >>
> >> Shamail Tahir wrote:
> >> > In the past, governance has helped (on the UC WG side) to reduce
> >> > overlaps/duplication in WGs chartered for similar objectives. I
> >> would
> >> > like to understand how we will handle this (if at all) with the
> >> new SIG
> >> > proposal?
> >>
> >> I tend to think that any overlap/duplication would get solved
> >> naturally,
> >> without having to force everyone through an application process
> >> that may
> >> discourage natural emergence of such groups. I feel like an
> >> application
> >> process would be premature optimization. We can always encourage
> >> groups
> >> to merge (or clean them up) after the fact. How much
> >> overlaps/duplicative groups did you end up having ?
> >>
> >>
> >> Fair point, it wasn't many. The reason I recalled this effort was
> >> because we had to go through the exercise after the fact and that
> >> made the volume of WGs to review much larger than had we asked the
> >> purpose whenever they were created. As long as we check back
> >> periodically and not let the work for validation/clean up pile up
> >> then this is probably a non-issue.
> >>
> >>
> >> > Also, do we have to replace WGs as a concept or could SIG
> >> > augment them? One suggestion I have would be to keep projects
> >> on the TC
> >> > side and WGs on the UC side and then allow for
> >> spin-up/spin-down of SIGs
> >> > as needed for accomplishing specific goals/tasks (picture of a
> >> diagram
> >> > I created at the Forum[1]).
> >>
> >> I feel like most groups should be inclusive of all community, so I'd
> >> rather see the SIGs being the default, and ops-specific or
> >> dev-specific
> >> groups the exception. To come back to my Public Cloud WG example,
> >> you
> >> need to have devs and ops in the same group in the first place
> >> before
> >> you would spin-up a "address scalability" SIG. Why not just have a
> >> Public Cloud SIG in the first place?
> >>
> >>
> >> +1, I interpreted originally that each use-case would be a SIG versus
> >> the SIG being able to be segment oriented (in which multiple
> >> use-cases could be pursued)
> >>
> >>
> >>  > [...]
> >> > Finally, how will this change impact the ATC/AUC status of the SIG
> >> > members for voting rights in the TC/UC elections?
> >>
> >> There are various options. Currently you give UC WG leads the AUC
> >> status. We could give any SIG lead both statuses. Or only give
> >> the AUC
> >> status to a subset of SIGs that the UC deems appropriate. It's
> >> really an
> >> implementation detail imho. (Also I would expect any SIG lead to
> >> already
> >> be both AUC and ATC somehow anyway, so that may be a non-issue).
> >>
> >>
> >> We can discuss this later because it really is an implementation
> >> detail. Thanks for the answers.
> >>
> >>
> >> --
> >> Thierry Carrez (ttx)
> >>
> >>
> >> 

[openstack-dev] [neutron][ml2][drivers][openvswitch] Question

2017-06-21 Thread Margin Hu

Hi Guys,

I have a question in the setup_physical_bridges function of 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py


 # block all untranslated traffic between bridges
self.int_br.drop_port(in_port=int_ofport)
br.drop_port(in_port=phys_ofport)

[refer](https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1159)

When is traffic between the bridges permitted? And when is the flow table of
the OVS bridge modified?












Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-06-21 Thread Matthieu Simonin
Hi Ken,

Thanks for starting this !
I've made a first pass on the epad and left some notes and questions there.

Best,

Matthieu
----- Original Message -----
> From: "Ken Giusti" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Wednesday, June 21, 2017 15:23:26
> Subject: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for
> message bus analysis
> 
> Hi All,
> 
> Andy and I have taken a stab at defining some test scenarios for analyzing the
> different message bus technologies:
> 
> https://etherpad.openstack.org/p/1BGhFHDIoi
> 
> We've started with tests for just the oslo.messaging layer to analyze
> throughput and latency as the number of message bus clients - and the bus
> itself - scale out.
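For concreteness, a minimal sketch of the kind of oslo.messaging-level probe
being discussed; the transport URL and topic are placeholders, and it assumes
an RPC server exposing an 'echo' method on the other end:

    import time

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    client = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='bench_topic'))

    # a real scenario would sweep message sizes and client counts while
    # recording throughput and latency percentiles, not a single call
    start = time.time()
    client.call({}, 'echo', data='x' * 1024)
    print('round trip: %.3f ms' % ((time.time() - start) * 1000))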
> 
> The next step will be to define messaging oriented test scenarios for an
> openstack deployment.  We've started by enumerating a few of the tools,
> topologies, and fault conditions that need to be covered.
> 
> Let's use this epad as a starting point for analyzing messaging - please
> feel free to contribute, question, and criticize :)
> 
> thanks,
> 
> 
> 
> --
> Ken Giusti  (kgiu...@gmail.com)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-21 Thread Armando M.
On 21 June 2017 at 17:40, Édouard Thuleau wrote:

> Hi,
>
> @Chaoyi,
> I don't want to change the core plugin interface. But I'm not sure we
> are talking about the same interface. I had a very quick look into the
> tricycle code and I think it uses the NeutronDbPluginV2 interface [1]
> which implements the Neutron DB model. Our Contrail Neutron plugin
> implements the NeutronPluginBaseV2 interface [2]. Anyway,
> NeutronDbPluginV2 is inheriting from NeutronPluginBaseV2 [3].
> Thanks for the pointer to the stadium paragraph.
>

Is there any bug report that captures the actual error you're facing? Of
the plugins that have been added to that list over time, most
work by just exercising the core plugin API, and we can look into the ones
that don't to figure out whether we overlooked some design abstractions
during code review.


>
> @Kevin,
> Service plugins loaded by default are defined in a constant list [4],
> and I don't see how I can prevent a default service plugin from being
> loaded [5].
>
> [1] https://github.com/openstack/tricircle/blob/master/
> tricircle/network/central_plugin.py#L128
> [2] https://github.com/Juniper/contrail-neutron-plugin/blob/
> master/neutron_plugin_contrail/plugins/opencontrail/
> contrail_plugin_base.py#L113
> [3] https://github.com/openstack/neutron/blob/master/neutron/
> db/db_base_plugin_v2.py#L125
> [4] https://github.com/openstack/neutron/blob/master/neutron/
> plugins/common/constants.py#L43
> [5] https://github.com/openstack/neutron/blob/master/neutron/
> manager.py#L190
>
> Édouard.
>
> On Wed, Jun 21, 2017 at 11:22 AM, Kevin Benton  wrote:
> > Why not just delete the service plugins you don't support from the
> default
> > plugins dict?
> >
> > On Wed, Jun 21, 2017 at 1:45 AM, Édouard Thuleau <
> edouard.thul...@gmail.com>
> > wrote:
> >>
> >> Ok, we would like to help on that. How can we start?
> >>
> >> I think the issue I raise in this thread must be the first point to
> >> address, and my second proposition seems to be the correct one. What do
> >> you think?
> >> But it will need some time, and I'm not sure we'll be able to fix all
> >> service plugins loaded by default before the next Pike release.
> >>
> >> I'd like to propose a workaround until all default service plugins are
> >> compatible with non-DB core plugins. We can continue to load that
> >> default service plugins list but authorize a core plugin to disable
> >> it completely with a private attribute on the core plugin class, like
> >> it's done for bulk/pagination/sorting operations.
> >>
> >> Of course, we need to add the ability to report any regression on
> >> that. I think unit tests will help and we can also work on a
> >> functional test based on a fake non-DB core plugin.
> >>
> >> Regards,
> >> Édouard.
> >>
> >> On Tue, Jun 20, 2017 at 12:09 AM, Kevin Benton 
> wrote:
> >> > The issue is mainly developer resources. Everyone currently working
> >> > upstream
> >> > doesn't have the bandwidth to keep adding/reviewing the layers of
> >> > interfaces
> >> > to make the DB optional that go untested. (None of the projects that
> >> > would
> >> > use them run a CI system that reports results on Neutron patches.)
> >> >
> >> > I think we can certainly accept patches to do the things you are
> >> > proposing,
> >> > but there is no guarantee that it won't regress to being DB-dependent
> >> > until
> >> > there is something reporting results back telling us when it breaks.
> >> >
> >> > So it's not that the community is against non-DB core plugins, it's
> just
> >> > that the people developing those plugins don't participate in the
> >> > community
> >> > to ensure they work.
> >> >
> >> > Cheers
> >> >
> >> >
> >> > On Mon, Jun 19, 2017 at 2:15 AM, Édouard Thuleau
> >> > 
> >> > wrote:
> >> >>
> >> >> Oops, sent too fast, sorry. I try again.
> >> >>
> >> >> Hi,
> >> >>
> >> >> Since Mitaka release, a default service plugins list is loaded when
> >> >> Neutron
> >> >> server starts [1]. That list is not editable and was extended with
> few
> >> >> services
> >> >> [2]. But all of them rely on the Neutron DB model.
> >> >>
> >> >> If a core driver is not based on the ML2 core plugin framework or not
> >> >> based on
> >> >> the 'neutron.db.models_v2' class, none of those service plugins will
> >> >> work.
> >> >>
> >> >> So my first question is: does Neutron still support core plugins not
> >> >> based
> >> >> on ML2
> >> >> or the 'neutron.db.models_v2' class?
> >> >>
> >> >> If yes, I would like to propose two solutions:
> >> >> - permit a core plugin to overload the service plugin class with its
> own
> >> >> implementation, and continue to use the actual Neutron db based
> >> >> services
> >> >> as
> >> >> the default.
> >> >> - modify all default service plugins to use the service plugin driver
> >> >> framework [3], and set the actual Neutron db based implementation as
> >> >> the default driver for services. That permits 

Re: [openstack-dev] [tripleo] Role updates

2017-06-21 Thread Emilien Macchi
On Wed, Jun 14, 2017 at 6:24 PM, Alex Schultz wrote:
> On Tue, Jun 13, 2017 at 11:11 AM, Alex Schultz wrote:
>> On Tue, Jun 13, 2017 at 6:58 AM, Dan Prince wrote:
>>> On Fri, 2017-06-09 at 09:24 -0600, Alex Schultz wrote:
 Hey folks,

 I wanted to bring to your attention that we've merged the change[0]
 to
 add a basic set of roles that can be combined to create your own
 roles_data.yaml as needed.  With this change the roles_data.yaml and
 roles_data_undercloud.yaml files in THT should not be changed by
 hand.
>>>
>>> In general I like the feature.
>>>
>>> I added some comments to your validations [1] patch below. We need
>>> those validations, but I think we need to carefully consider adding a
>>> hard dependency on python-tripleoclient simply to have validations in
>>> tree. Wondering if perhaps a t-h-t-utils library project might be in
>>> order here to contain routines we use in t-h-t and in higher level
>>> workflow tools in Mistral and on the CLI? This might also make the
>>> tools/process-templates.py stuff cleaner as well.
>>>
>>> Thoughts?
>>
>> So my original implementation of the roles stuff included a standalone
>> script in THT to generate the roles_data.yaml files.  This was -1'd as
>> realistically the actions for managing this should probably live
>> within python-tripleoclient.  This made sense to me as that's how the
>> end user really should be interacting with these things.  Given that
>> the tripleoclient and the UI are the two ways an operator is going to
>> consume THT, I think there is already an undocumented requirement
>> that should be there.
>>
>> An alternative would be to move the roles generation items into
>> tripleo-common but then we would have to write two distinct ways of
>> then executing this code. One being tripleoclient and the other being
>> a standalone script which basically would have to reinvent the
>> interface provided by tripleoclient/openstackclient.  Since we're not
>> allowing folks to dynamically construct the roles_data.yaml as part of
>> the overcloud deployment yet, I'm not sure we should try and move this
>> around further unless there's an agreed upon way we want to handle
>> this.
>>
>> I think the better work would be to split the
>> tripleoclient/instack-undercloud dependency which is really where the
>> problem lies.  We shouldn't be pulling in the world for tripleoclient
>> if we are just going to operate on only the overcloud.
>
> As a follow up, I've taken some time to move the roles functions in to
> tripleo-common[0] and out of tripleoclient[1]. With this, I've also
> updated the validation patch with a small python script that leverages
> the tripleo-common work.

I would like to propose that once we have
https://review.openstack.org/#/c/472731/ in place, we would force our
users to generate roles_data.yaml instead of providing a default file.
One direct benefit for that would be to solve this kind of case:
https://bugs.launchpad.net/tripleo/+bug/1671859 (and possibly decrease
deployment time when using custom roles).

One way to do it could be:

Pike
- Validate roles data with https://review.openstack.org/#/c/472731/
- Make the CLI generate the current default config by default (when not
using custom roles), if that's not already the case
- Document the CLI (in progress by Alex) and deprecate the roles_data
file, making this clear in the documentation for our users.

Queens
- Remove the default roles data from THT
- Make roles data file management a required step in the docs

Does it make sense?
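For illustration, the generation step boils down to concatenating the chosen
roles/*.yaml entries into a single list; the paths and role names below are
illustrative only:

    import yaml

    roles = []
    for name in ('Controller', 'Compute'):
        # each roles/<name>.yaml holds a one-entry list describing a role
        with open('roles/%s.yaml' % name) as f:
            roles.extend(yaml.safe_load(f))

    with open('roles_data.yaml', 'w') as f:
        yaml.safe_dump(roles, f, default_flow_style=False)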

> Of course while writing this email I noticed that tripleo-common also
> pulls in instack-undercloud[3] like tripleoclient[4] so I'm not sure
> this is actually an improvement.  ¯\_(ツ)_/¯
>
> Thanks,
> -Alex
>
> [0] https://review.openstack.org/#/c/474332/
> [1] https://review.openstack.org/#/c/474343/
> [2] https://review.openstack.org/#/c/472731/
> [3] 
> https://github.com/rdo-packages/tripleo-common-distgit/blob/rpm-master/openstack-tripleo-common.spec#L21
> [4] 
> https://github.com/rdo-packages/tripleoclient-distgit/blob/rpm-master/python-tripleoclient.spec#L36
>
>>
>> Thanks,
>> -Alex
>>
>>>
>>> Dan
>>>
 Instead if you have an update to a role, please update the
 appropriate
 roles/*.yaml file. I have proposed a change[1] to THT with additional
 tools to validate that the roles/*.yaml files are updated and that
 there are no unaccounted for roles_data.yaml changes.  Additionally
 this change adds a new tox target to assist in the generation of
 these basic roles data files that we provide.

 Ideally I would like to get rid of the roles_data.yaml and
 roles_data_undercloud.yaml so that the end user doesn't have to
 generate this file at all but that won't happen this cycle.  In the
 mean time, additional documentation around how to work with roles has
 been added to the roles README[2].

 

Re: [openstack-dev] diskimage builder works for trusty but not for xenial

2017-06-21 Thread Ian Wienand

On 06/21/2017 04:44 PM, Ignazio Cassano wrote:

* Connection #0 to host cloud-images.ubuntu.com left intact
Downloaded and cached
http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz,
having forced upstream caches to revalidate
xenial-server-cloudimg-amd64-root.tar.gz: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match



Are there any problems on http://cloud-images.ubuntu.com ?


There was [1] which is apparently fixed.

As Paul mentioned, the -minimal builds take a different approach and
build the image from debootstrap, rather than modifying the upstream
image.  They are generally well tested just as a side-effect of infra
relying on them daily.  You can use DIB_DISTRIBUTION_MIRROR to point
the build at a local mirror and eliminate another source of instability
(however, that leaves the mirror in the final image ... a known issue.
Contributions welcome :)
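A sketch of such a build, with a placeholder mirror URL (and DIB_RELEASE
pinned explicitly for the sake of the example):

    import os
    import subprocess

    env = dict(os.environ,
               DIB_RELEASE='xenial',
               DIB_DISTRIBUTION_MIRROR='http://mirror.example.com/ubuntu')
    # ubuntu-minimal builds from debootstrap instead of the cloud image
    subprocess.check_call(
        ['disk-image-create', '-o', 'xenial', 'ubuntu-minimal', 'vm'],
        env=env)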

-i

[1] https://bugs.launchpad.net/cloud-images/+bug/1699396




Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-21 Thread Armando M.
On 20 June 2017 at 00:09, Kevin Benton wrote:

> The issue is mainly developer resources. Everyone currently working
> upstream doesn't have the bandwidth to keep adding/reviewing the layers of
> interfaces to make the DB optional that go untested. (None of the projects
> that would use them run a CI system that reports results on Neutron
> patches.)
>
> I think we can certainly accept patches to do the things you are
> proposing, but there is no guarantee that it won't regress to being
> DB-dependent until there is something reporting results back telling us
> when it breaks.
>
> So it's not that the community is against non-DB core plugins, it's just
> that the people developing those plugins don't participate in the community
> to ensure they work.
>
> Cheers
>

> On Mon, Jun 19, 2017 at 2:15 AM, Édouard Thuleau <
> edouard.thul...@gmail.com> wrote:
>
>> Oops, sent too fast, sorry. I try again.
>>
>> Hi,
>>
> >> Since the Mitaka release, a default service plugins list is loaded when
> >> Neutron
> >> server starts [1]. That list is not editable and was extended with a few
> >> services
> >> [2]. But all of them rely on the Neutron DB model.
>>
> >> If a core driver is not based on the ML2 core plugin framework or not
> >> based on
> >> the 'neutron.db.models_v2' class, none of those service plugins will work.
>>
>
>
> >> So my first question is: does Neutron still support core plugins not based
> >> on ML2
> >> or the 'neutron.db.models_v2' class?
>>
>> If yes, I would like to propose two solutions:
> >> - permit a core plugin to overload the service plugin class with its own
> >> implementation, and continue to use the actual Neutron db based services
> >> as
> >> the default.
> >> - modify all default service plugins to use the service plugin driver
> >> framework [3], and set the actual Neutron db based implementation as the
> >> default driver for services. That permits core drivers not based on the
> >> Neutron DB to specify a driver. We can see that solution was adopted in
> >> the
> >> networking-bgpvpn project, where we can find two abstract driver classes,
> >> one for
> >> core drivers based on the Neutron DB model [4] and one used by core drivers
> >> not based
> >> on the DB [5], such as the Contrail driver [6].
>>
>
I think we're missing the fundamental premise behind the introduction
of this map, which is that the addition of plugins to this list is done
only on the basis that the plugin being added introduces
functionality that's orthogonal to core plugins, and as such should work
with any of them, and not depend on internals of the core plugin implementation.

If something does break, it should be treated as a bug, rather than
allowing this to be overloaded, because that goes against the main rationale
for it being hard-coded.


>
>> [1] https://github.com/openstack/neutron/commit/aadf2f30f84dff3d
>> 85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
>> [2] https://github.com/openstack/neutron/blob/master/neutron/plu
>> gins/common/constants.py#L43
>> [3] https://github.com/openstack/neutron/blob/master/neutron/ser
>> vices/service_base.py#L27
>> [4] https://github.com/openstack/networking-bgpvpn/blob/master/n
>> etworking_bgpvpn/neutron/services/service_drivers/driver_api.py#L226
>> [5] https://github.com/openstack/networking-bgpvpn/blob/master/n
>> etworking_bgpvpn/neutron/services/service_drivers/driver_api.py#L23
>> [6] https://github.com/Juniper/contrail-neutron-plugin/blob/mast
>> er/neutron_plugin_contrail/plugins/opencontrail/networking_
>> bgpvpn/contrail.py#L36
>>
>> Regards,
>> Édouard.
>>
>> On Mon, Jun 19, 2017 at 10:47 AM, Édouard Thuleau
>>  wrote:
>> > Hi,
>> > Since Mitaka release [1], a default service plugins list is loaded
>> > when Neutron server starts. That list is not editable and was extended
>> > with few services [2]. But none of th
>> >
>> > [1] https://github.com/openstack/neutron/commit/aadf2f30f84dff3d
>> 85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
>> > [2] https://github.com/openstack/neutron/blob/master/neutron/plu
>> gins/common/constants.py#L43
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Jeremy Stanley
On 2017-06-21 16:27:14 -0400 (-0400), Sean Dague wrote:
[...]
> I'd still also like to see us using that structure well, and
> mirroring only things we tag as official to github.com/openstack,
> and the rest to /openstack-ecosystem or something.
[...]

I can understand the sentiment, but we'd need to completely abandon
the replication mechanism provided by Gerrit and implement our own
separate solution to do any conditional filtering or shuffling of
different repos between namespaces. It's not something I expect
we'll find volunteers chomping at the bit to write, so it might make
more sense to stop having the Infra team manage GH mirroring at all
and let someone else in the community manually curate and
periodically update what goes there instead if they want to be
responsible for that.
-- 
Jeremy Stanley




Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Jeremy Stanley
On 2017-06-21 13:52:11 -0500 (-0500), Lauren Sell wrote:
[...]
> To make this actionable...Github is just a mirror of our
> repositories, but for better or worse it's the way most people in
> the world explore software. If you look at OpenStack on Github
> now, it’s impossible to tell which projects are official. Maybe we
> could help by better curating the Github projects (pinning some of
> the top projects, using the new topics feature to put tags
> like openstack-official or openstack-unofficial, coming up with
> more standard descriptions or naming, etc.).

I hadn't noticed the pinned repositories option until you mentioned
it: appears they just extended that feature to orgs back in October
(and introduced the topics feature in January). I could see
potentially integrating pinning and topic management into the
current GH API script we run when creating new mirrors
there--assuming these are accessible via their API anyway--and yes
normalizing the descriptions to something less freeform is something
else we'd discussed to be able to drive users back to the official
locations for repositories (or perhaps to the project navigator).
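If they are, the calls would look roughly like this (a sketch only: the token
and topic names are placeholders, and in 2017 the topics endpoint still sat
behind a preview Accept header):

    import requests

    resp = requests.put(
        'https://api.github.com/repos/openstack/nova/topics',
        headers={'Authorization': 'token <API-TOKEN>',
                 'Accept': 'application/vnd.github.mercy-preview+json'},
        json={'names': ['openstack-official']},
    )
    resp.raise_for_status()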

I've already made recent attempts to clarify our use of GH in the
org descriptions and linked the openstack org back to the project
navigator too, since those were easy enough to do right off the bat.

> Same goes for our repos…if there’s a way we could differentiate
> between official and unofficial projects on this page it would be
> really useful: https://git.openstack.org/cgit/openstack/

I have an idea as to how to go about that by generating custom
indices rather than relying on the default one cgit provides; I'll
mull it over.

> 2) Create a simple structure within the official set of projects
> to provide focus and a place to get started. The challenge (again
> to our success, and lots of great work by the community) is that
> even the official project set is too big for most people to
> follow.

This is one of my biggest concerns as well where high-cost (in the
sense of increasingly valuable Infra team member time) solutions are
being tossed around to solve the "what's official?" dilemma, while
not taking into account that the overwhelming majority of active Git
repositories we're hosting _are_ already deliverables for official
teams. I strongly doubt that just labelling the minority as
unofficial will in any way lessen the overall confusion about the
*more than one thousand* official Git repositories we're
maintaining.

> While I fully admit it was an imperfect system, the three tier
> delineation of “integrated," “incubated" and “stackforge" was
> something folks could follow pretty easily. The tagging and
> mapping is valuable and provides additional detail, but having the
> three clear buckets is ideal.  I would like to see us adopt a
> similar system, even if the names change (i.e. core infrastructure
> services, optional services, stackforge). Happy to throw out ideas
> if there is interest.
[...]

Nearly none (almost certainly only a single-digit percentage anyway)
of the Git repositories we host are themselves source code for
persistent network services. We have lots of tools, reusable
libraries, documentation, meta-documentation, test harnesses,
configuration management frameworks, plugins... we probably need a
way to reroute audiences who are not strictly interested in browsing
source code itself so they stop looking at those Git repositories or
else confusion is imminent regardless. As a community we do nearly
_everything_ in Git, far beyond mere application and service
software.

The other logical disconnect I'm seeing is that our governance is
formed around teams, not around software. Trying to explain the
software through the lens of governance is almost certain to confuse
newcomers. Because we use one term (OpenStack!) for both the
community of contributors and the software they produce, it's going
to become very tangled in people's minds. I'm starting to strongly
wish we could use entirely different names for the community and the
software, but that train has probably already sailed (and could
result in different confusion all its own too, I suppose).
-- 
Jeremy Stanley




Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-21 Thread Rochelle Grober


> From: Matt 
> On 6/21/2017 7:04 AM, Shewale, Bhagyashri wrote:
> > I would like to write functional tests to check the exact req/resp
> > for each placement API for all supported versions similar
> >
> > to what is already done for other APIs under
> > nova/tests/functional/api_sample_tests/api_samples/*.
> >
> > These request/response json samples can be used by the
> > api.openstack.org and in the manuals.
> >
> > There are already functional tests written for placement APIs under
> > nova/tests/functional/api/openstack/placement,
> >
> > but these tests don’t check the entire HTTP response for each API
> > for all supported versions.
> >
> > I think adding such functional tests for checking response for each
> > placement API would be beneficial to the project.
> >
> > If there is an interest to create such functional tests, I can file a
> > new blueprint for this activity.
> >
> 
> This has come up before and we don't want to use the same functional API
> samples infrastructure for generating API samples for the placement API.
> The functional API samples tests are confusing and present a steep learning
> curve for new contributors (and even long-in-the-tooth contributors still get
> confused by them).

I second the suggestion to talk with Chris Dent (mentioned below), but I also want to 
encourage you to write tests.  Write API tests that demonstrate *exactly* what 
is allowed and not allowed and verify that whether the api call is constructed 
correctly or not, that the responses are appropriate and correct.  By writing 
these new/extra/improved tests, the Interop guidelines can use these tests to 
improve interop expectations across clouds.  Plus, operators will be able to 
more quickly identify what the problem is when the tests demonstrate the 
problem-response patterns.  And, like you said, knowing what to expect makes 
documenting expected behaviors, for both correct and incorrect uses, much more 
straightforward.  Details are very important when tracking down issues based on 
the responses logged.

I want to encourage you to work with Chris to help expand our tests and their 
specificity and their extent.
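For reference, the existing placement tests Matt mentions below are gabbi-based:
YAML files of request/response assertions loaded through gabbi's unittest
driver, roughly like this (paths and the app factory are illustrative):

    import os

    from gabbi import driver

    TESTS_DIR = os.path.join(os.path.dirname(__file__), 'gabbits')

    def app():
        # placeholder: return the WSGI application under test
        raise NotImplementedError

    def load_tests(loader, tests, pattern):
        # each YAML file under gabbits/ lists requests plus the exact
        # status, headers and JSON members the response must contain
        return driver.build_tests(TESTS_DIR, loader, intercept=app)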

Thanks!

--Rocky (with my interop, QA and ops hats on)



> Talk with Chris Dent about ideas here for API samples with placement.
> He's talked about building something into the gabbi library for this, but I 
> don't
> know if that's being worked on or not.
> 
> Chris is also on vacation for a couple of weeks, just FYI.
> 
> --
> 
> Thanks,
> 
> Matt
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel] Making Fuel a hosted project

2017-06-21 Thread Davanum Srinivas
Vova,

I really hope and wish for a reboot!

Please do note that the proposed change is only a governance repo
change. No one here has proposed retiring the Fuel
repositories (the process for retirement is here: [1]).

Thanks,
Dims

[1] https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

On Wed, Jun 21, 2017 at 4:34 PM, Vladimir Kuklin wrote:
> Folks, I sent a reply a couple of days ago, but somehow it got lost. The
> original message goes below
>
> Folks
>
> It is essentially true that Fuel is no longer being developed, as almost 99%
> of the people have left the project and are working on something else. Maybe,
> in the future, when the dust settles, we can resume working on it, but the
> probability is not so high as of now.
>
> I would like to thank everyone who worked on the project - contributors,
> reviewers, core-reviewers, ex-PTLs Alex Shtokolov, Vladimir Kozhukalov and
> Dmitry Borodaenko - it was a pleasure to work with you guys.
>
> Also, I would like to thank puppet-openstack project team as we worked
> together on many things really effectively and wish them good luck as well.
>
> Special kudos to Jay and Dims, as they helped us a lot on the governance and
> community side.
>
> I hope, we will work some day together again.
>
> At the same time, I would like to mention that Fuel is still being actively
> used and some bugs are still being fixed, so I would suggest, if that is
> possible, that we keep the github repository available for a while, so that
> those guys can still access the repositories.
>
> That said, I do not have any other objections to making Fuel a hosted
> project.
>
>
> Yours Faithfully
>
> Vladimir Kuklin
>
> email: ag...@aglar.ru
> email(alt.): aglaren...@gmail.com
> mob.: +79267023968
> mob.: (when in EU) +393497028541
> mob.: (when in US) +19293122331
> skype: kuklinvv
> telegram



-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [tc][fuel] Making Fuel a hosted project

2017-06-21 Thread Vladimir Kuklin
Folks, I sent a reply a couple of days ago, but somehow it got lost. The
original message goes below

Folks

It is essentially true that Fuel is no longer being developed, as almost 99%
of the people have left the project and are working on something else. Maybe,
in the future, when the dust settles, we can resume working on it, but the
probability is not so high as of now.

I would like to thank everyone who worked on the project - contributors,
reviewers, core-reviewers, ex-PTLs Alex Shtokolov, Vladimir Kozhukalov and
Dmitry Borodaenko - it was a pleasure to work with you guys.

Also, I would like to thank puppet-openstack project team as we worked
together on many things really effectively and wish them good luck as well.

Special kudos to Jay and Dims, as they helped us a lot on the governance and
community side.

I hope, we will work some day together again.

At the same time, I would like to mention that Fuel is still being actively
used and some bugs are still being fixed, so I would suggest, if that is
possible, that we keep the github repository available for a while, so that
those guys can still access the repositories.

That said, I do not have any other objections to making Fuel a hosted
project.


Yours Faithfully

Vladimir Kuklin

email: ag...@aglar.ru
email(alt.): aglaren...@gmail.com
mob.: +79267023968
mob.: (when in EU) +393497028541
mob.: (when in US) +19293122331
skype: kuklinvv
telegram


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Sean Dague
On 06/21/2017 02:52 PM, Lauren Sell wrote:
> Two things we should address:
> 
> 1) Make it more clear which projects are “officially” part of
> OpenStack. It’s possible to find that information, but it’s not obvious.
> I am one of the people who laments the demise of stackforge…it was very
> clear that stackforge projects were not official, but part of the
> OpenStack ecosystem. I wish it could be resurrected, but I know that’s
> impractical. 
> 
> To make this actionable...Github is just a mirror of our repositories,
> but for better or worse it's the way most people in the world
> explore software. If you look at OpenStack on Github now, it’s
> impossible to tell which projects are official. Maybe we could help by
> better curating the Github projects (pinning some of the top projects,
> using the new new topics feature to put tags like openstack-official or
> openstack-unofficial, coming up with more standard descriptions or
> naming, etc.). Same goes for our repos…if there’s a way we could
> differentiate between official and unofficial projects on this page it
> would be really useful: https://git.openstack.org/cgit/openstack/

I think even if it was only solvable on github, and not cgit, it would
help a lot. The idea of using github project tags and pinning suggested
by Lauren seems great to me.

If we replicated the pinning on github.com/openstack to "popular
projects" here - https://www.openstack.org/software/, and then even just
started with the tags as defined in governance -
https://governance.openstack.org/tc/reference/tags/index.html, it would
go a long way.

I think where the conversation is breaking down is realizing that
different people process the information we put out there in different
ways, and different things lock in and make sense to them. Lots of
people are trained to perceive github structure as meaningful, because
it is 98% of the time. As such I'd still also like to see us using that
structure well, and mirroring only things we tag as official to
github.com/openstack, and the rest to /openstack-ecosystem or something.

Even if that's flat inside our gerrit and cgit environment.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] Sydney Summit Call for Presentations Open Until July 14th

2017-06-21 Thread Erin Disney
Hi everyone,
 
You have less than a month to submit your speaking proposals for the OpenStack
Summit Sydney, November 6-8, 2017. Submit your proposals by July 14, 11:59PM
Pacific Time (July 15, 2017 at 6:59 UTC)!
 
There is only ONE week left to nominate yourself or a colleague to be a track
chair. Learn more about the selection process and submit your nomination by
June 27.
 
Important Links:
Registration - make sure to register now before the prices increase in early
September
Hotels
Travel Support Program
Visa Information
Please note that all non-Australian residents will need a visa to enter
Australia. This includes United States citizens.
Sponsorship - deadline to sign the sponsor contract is September 27

If you have any general Summit questions, contact us at sum...@openstack.org.
 
Cheers,
Erin

Erin Disney
OpenStack Marketing
e...@openstack.org 


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-21 Thread Amrith Kumar
Thank you Kevin. Lots of container (specific?) goodness here.

-amrith

--
Amrith Kumar
Phone: +1-978-563-9590


On Mon, Jun 19, 2017 at 2:34 PM, Fox, Kevin M wrote:

> Thanks for starting this difficult discussion.
>
> I think I agree with all the lessons learned except the nova one. While
> you can treat containers and VMs the same, after years of using both, I
> really don't think it's a good idea to treat them equally. Containers can't
> work properly if used as a VM. (Really, really.)
>
> I agree wholeheartedly with your statement that it's mostly an
> orchestration problem and should reuse existing tooling now that there are options.
>
> I would propose the following that I think meets your goals and could
> widen your contributor base substantially:
>
> Look at the Kubernetes (k8s) concept of Operator ->
> https://coreos.com/blog/introducing-operators.html
>
> They allow application-specific logic to be added to Kubernetes while
> reusing the rest of k8s to do what it's good at: container orchestration.
> etcd is just a clustered database, and if the operator concept works for it,
> it should also work for other databases such as Galera.
>
> Where I think the containers/VM thing is incompatible is the thing I think
> will make Trove's life easier. You can think of a member of the database as a
> few different components, such as:
>  * main database process
>  * metrics gatherer (such as https://github.com/prometheus/mysqld_exporter
> )
>  * trove_guest_agent
>
> With the current approach, all are mixed into the same VM image, making it
> very difficult to update the trove_guest_agent without touching the main
> database process (needed when you upgrade the Trove controllers). With the
> k8s sidecar concept, each would be a separate container loaded into the
> same pod.
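A sketch of that layout, expressed as the pod dict a Kubernetes client would
accept (image names and versions below are illustrative only):

    pod = {
        'apiVersion': 'v1',
        'kind': 'Pod',
        'metadata': {'name': 'trove-mysql-0'},
        'spec': {'containers': [
            # unmodified upstream database image
            {'name': 'mysql', 'image': 'mysql:5.7'},
            # metrics gatherer sidecar
            {'name': 'metrics', 'image': 'prom/mysqld-exporter'},
            # Trove guest agent, upgradable without touching the db
            {'name': 'guest-agent', 'image': 'trove/guest-agent:latest'},
        ]},
    }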
>
> So rather than needing to maintain a Trove image for every possible
> combination of db version, Trove version, etc., you can reuse upstream
> database containers along with Trove-provided guest agents.
>
> There's a secure channel between kube-apiserver and kubelet so you can
> reuse it for secure communications. No need to add anything for secure
> communication. trove engine -> kubectl exec x-db -c guest_agent some
> command.
>
> There is k8s federation, so if the operator was started at the federation
> level, it can cross multiple OpenStack regions.
>
> Another big feature that hasn't been mentioned yet and that I think is
> critical: in our performance tests, databases in VMs have never performed
> particularly well. Using k8s as a base, bare metal nodes could be pulled in
> easily, with dedicated disks or SSDs that the pods land on that are very,
> very close to the database. This should give native performance.
>
> So, my suggestion would be to strongly consider basing Trove v2 on
> Kubernetes. It can provide a huge bang for the buck, simplifying the Trove
> architecture substantially while gaining the new features your list as
> being important. The Trove v2 OpenStack api can be exposed as a very thin
> wrapper over k8s Third Party Resources (TPR) and would make Trove entirely
> stateless. k8s maintains all state for everything in etcd.
>
> Please consider this architecture.
>
> Thanks,
> Kevin
>
> --
> *From:* Amrith Kumar [amrith.ku...@gmail.com]
> *Sent:* Sunday, June 18, 2017 4:35 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [trove][all][tc] A proposal to rearchitect
> Trove
>
> Trove has evolved rapidly over the past several years, since integration
> in IceHouse when it only supported single instances of a few databases.
> Today it supports a dozen databases including clusters and replication.
>
> The user survey [1] indicates that while there is strong interest in the
> project, there are few large production deployments that are known of (by
> the development team).
>
> Recent changes in the OpenStack community at large (company realignments,
> acquisitions, layoffs) and the Trove community in particular, coupled with
> a mounting burden of technical debt have prompted me to make this proposal
> to re-architect Trove.
>
> This email summarizes several of the issues that face the project, both
> structurally and architecturally. This email does not claim to include a
> detailed specification for what the new Trove would look like, merely the
> recommendation that the community should come together and develop one so
> that the project can be sustainable and useful to those who wish to use it
> in the future.
>
> TL;DR
>
> Trove, with support for a dozen or so databases today, finds itself in a
> bind because there are few developers, and a code-base with a significant
> amount of technical debt.
>
> Some architectural choices which the team made over the years have
> consequences which make the project less than ideal for deployers.
>
> Given that there are no major production deployments of Trove at present,
> 

Re: [openstack-dev] Required Ceph rbd image features

2017-06-21 Thread Fox, Kevin M
Anyone else seen a problem with kernel rbd when ceph isn't fully up when a 
kernel rbd mount is attempted?

The mount blocks as it should, but if ceph takes too long to start, it 
eventually enters a D state forever even though ceph comes up happy. It's like 
it times out and stops trying. Only a forced reboot will solve it. :/

This a known issue?

Thanks,
Kevin


From: Jason Dillaman [jdill...@redhat.com]
Sent: Wednesday, June 21, 2017 12:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Required Ceph rbd image features

On Wed, Jun 21, 2017 at 12:32 PM, Jon Bernard  wrote:
> I suspect you'd want to enable layering at minimum.

I'd agree that layering is probably the most you'd want to enable for
krbd-use cases as of today. The v4.9 kernel added support for
exclusive-lock, but that probably doesn't provide much additional
benefit at this point. The striping v2 feature is still not supported
by krbd for non-basic stripe count/unit settings.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Required Ceph rbd image features

2017-06-21 Thread Jason Dillaman
On Wed, Jun 21, 2017 at 12:32 PM, Jon Bernard  wrote:
> I suspect you'd want to enable layering at minimum.

I'd agree that layering is probably the most you'd want to enable for
krbd-use cases as of today. The v4.9 kernel added support for
exclusive-lock, but that probably doesn't provide much additional
benefit at this point. The striping v2 feature is still not supported
by krbd for non-basic stripe count/unit settings.
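
For reference, a minimal sketch with the python-rbd bindings that creates a
format-2 image with only the layering feature enabled, which should remain
mappable by krbd (the pool and image names are made up):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')  # hypothetical pool name
    try:
        # 1 GiB image, format 2, layering only -- no exclusive-lock/striping
        rbd.RBD().create(ioctx, 'test-image', 1024 ** 3,
                         old_format=False,
                         features=rbd.RBD_FEATURE_LAYERING)
    finally:
        ioctx.close()
        cluster.shutdown()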

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Lauren Sell
Several folks on this thread have talked about the different constituencies and 
problems we’re trying to solve with naming. Most of the people following this 
thread understand all of the terminology and governance we’ve defined, but 
that's still a very small percentage of people who care about OpenStack at the 
end of the day. I think we’re trying to communicate to the 99% who have 
relatively low context: potential users, people in other open source 
communities, managers at vendor companies, press/analysts, etc. who really want 
to know what we’re doing, but feel overwhelmed and need a simple structure to 
interpret it.

Two things we should address:

1) Make it more clear which projects are “officially” part of OpenStack. It’s 
possible to find that information, but it’s not obvious. I am one of the people 
who laments the demise of stackforge…it was very clear that stackforge projects 
were not official, but part of the OpenStack ecosystem. I wish it could be 
resurrected, but I know that’s impractical. 

To make this actionable...Github is just a mirror of our repositories, but for 
better or worse it's the way most people in the world explore software. If you 
look at OpenStack on Github now, it’s impossible to tell which projects are 
official. Maybe we could help by better curating the Github projects (pinning 
some of the top projects, using the new topics feature to put tags like 
openstack-official or openstack-unofficial, coming up with more standard 
descriptions or naming, etc.). Same goes for our repos…if there’s a way we 
could differentiate between official and unofficial projects on this page it 
would be really useful: https://git.openstack.org/cgit/openstack/

2) Create a simple structure within the official set of projects to provide 
focus and a place to get started. The challenge (again to our success, and lots 
of great work by the community) is that even the official project set is too 
big for most people to follow. 

While I fully admit it was an imperfect system, the three tier delineation of 
“integrated," “incubated" and “stackforge" was something folks could follow 
pretty easily. The tagging and mapping is valuable and provides additional 
detail, but having the three clear buckets is ideal.  I would like to see us 
adopt a similar system, even if the names change (i.e. core infrastructure 
services, optional services, stackforge). Happy to throw out ideas if there is 
interest.

Thanks,
Lauren


> On Jun 21, 2017, at 11:42 AM, Chris Hoge  wrote:
> 
> 
>> On Jun 21, 2017, at 9:20 AM, Clark Boylan > > wrote:
>> 
>> On Wed, Jun 21, 2017, at 08:48 AM, Dmitry Tantsur wrote:
>>> On 06/19/2017 05:42 PM, Chris Hoge wrote:
 
 
> On Jun 15, 2017, at 5:57 AM, Thierry Carrez  > wrote:
> 
> Sean Dague wrote:
>> [...]
>> I think those are all fine. The other term that popped into my head was
>> "Friends of OpenStack" as a way to describe the openstack-hosted efforts
>> that aren't official projects. It may be too informal, but I do think
>> the OpenStack-Hosted vs. OpenStack might still mix up in people's head.
> 
> My original thinking was to call them "hosted projects" or "host
> projects", but then it felt a bit incomplete. I kinda like the "Friends
> of OpenStack" name, although it seems to imply some kind of vetting that
> we don't actually do.
 
 Why not bring back the name Stackforge and apply that
 to unofficial projects? It’s short, descriptive, and unambiguous.
>>> 
>>> Just keep in mind that people always looked at stackforge projects as
>>> "immature 
>>> experimental projects". I remember getting questions "when is
>>> ironic-inspector 
>>> going to become a real project" because of our stackforge prefix back
>>> then, even 
>>> though it was already used in production.
>> 
>> A few days ago I suggested a variant of Thierry's suggestion below. Get
>> rid of the 'openstack' prefix entirely for hosting and use stackforge
>> for everything. Then officially governed OpenStack projects are hosted
>> just like any other project within infra under the stackforge (or Opium)
>> name. The problem with the current "flat" namespace is that OpenStack
>> means something specific and we have overloaded it for hosting. But we
>> could flip that upside down and host OpenStack within a different flat
>> namespace that represented "project hosting using OpenStack infra
>> tooling”.
> 
> I dunno. I understand that it’s extra work to have two namespaces,
> but it sends a clear message. Approved TC, UC, and Board projects
> remain under openstack, and unofficial move to a name that is not
> openstack (i.e. stackforge/opium/etc).
> 
> As part of a branding exercise, it creates a clear, easy-to-understand,
> easy-to-explain division.
> 
> For names like stackforge being considered a pejorative, 

Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Jeremy Stanley
On 2017-06-19 08:42:04 -0700 (-0700), Chris Hoge wrote:
[...]
> Why not bring back the name Stackforge and apply that to
> unofficial projects? It’s short, descriptive, and unambiguous.
[...]

Logistical points aside, that name is strikingly similar to another
and previously much more popular but now defunct s.*forge source
code hosting platform; that naming pattern can imply a fork of the
original codebase, as is the case for gforge/fusionforge.

"Unambiguous" is in the eye of the beholder.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-21 Thread Chris Friesen

On 06/21/2017 10:46 AM, Henning Schild wrote:

Am Wed, 21 Jun 2017 10:04:52 -0600
schrieb Chris Friesen :



i guess you are talking about that section from [1]:


We could use a host level tunable to just reserve a set of host
pCPUs for running emulator threads globally, instead of trying to
account for it per instance. This would work in the simple case,
but when NUMA is used, it is highly desirable to have more fine
grained config to control emulator thread placement. When real-time
or dedicated CPUs are used, it will be critical to separate
emulator threads for different KVM instances.


Yes, that's the relevant section.


I know it has been considered, but I would like to bring the topic up
again, because doing it that way allows for many more rt-VMs on a host
and I am not sure I fully understood why the idea was discarded in the
end.

I do not really see the influence of NUMA here. Say the
emulator_pin_set is used only for realtime VMs: we know that the
emulators and IOs can be "slow", so crossing NUMA nodes should not be an
issue. Or you could say the set needs to contain at least one core per
NUMA node and schedule emulators next to their vcpus.

As we know from our setup, and as Luiz confirmed - it is _not_ "critical
to separate emulator threads for different KVM instances".
They have to be separated from the vcpu-cores but not from each other.
At least not on the "cpuset" basis, maybe "blkio" and cgroups like that.


I'm reluctant to say conclusively that we don't need to separate emulator 
threads since I don't think we've considered all the cases.  For example, what 
happens if one or more of the instances are being live-migrated?  The migration 
thread for those instances will be very busy scanning for dirty pages, which 
could delay the emulator threads for other instances and also cause significant 
cross-NUMA traffic unless we ensure at least one core per NUMA-node.


Also, I don't think we've determined how much CPU time is needed for the 
emulator threads.  If we have ~60 CPUs available for instances split across two 
NUMA nodes, can we safely run the emulator threads of 30 instances all together 
on a single CPU?  If not, how much "emulator overcommit" is allowable?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Lance Bragstad


On 06/21/2017 11:55 AM, Matt Riedemann wrote:
> On 6/21/2017 11:17 AM, Shamail Tahir wrote:
>>
>>
>> On Wed, Jun 21, 2017 at 12:02 PM, Thierry Carrez
>> > wrote:
>>
>> Shamail Tahir wrote:
>> > In the past, governance has helped (on the UC WG side) to reduce
>> > overlaps/duplication in WGs chartered for similar objectives. I would
>> > like to understand how we will handle this (if at all) with the new SIG
>> > proposal?
>>
>> I tend to think that any overlap/duplication would get solved
>> naturally,
>> without having to force everyone through an application process
>> that may
>> discourage natural emergence of such groups. I feel like an
>> application
>> process would be premature optimization. We can always encourage
>> groups
>> to merge (or clean them up) after the fact. How much
>> overlaps/duplicative groups did you end up having ?
>>
>>
>> Fair point, it wasn't many. The reason I recalled this effort was
>> because we had to go through the exercise after the fact and that
>> made the volume of WGs to review much larger than had we asked the
>> purpose whenever they were created. As long as we check back
>> periodically and not let the work for validation/clean up pile up
>> then this is probably a non-issue.
>>
>>
>> > Also, do we have to replace WGs as a concept or could SIGs
>> > augment them? One suggestion I have would be to keep projects on the TC
>> > side and WGs on the UC side and then allow for spin-up/spin-down of SIGs
>> > as needed for accomplishing specific goals/tasks (picture of a diagram
>> > I created at the Forum[1]).
>>
>> I feel like most groups should be inclusive of all community, so I'd
>> rather see the SIGs being the default, and ops-specific or
>> dev-specific
>> groups the exception. To come back to my Public Cloud WG example,
>> you
>> need to have devs and ops in the same group in the first place
>> before
>> you would spin-up a "address scalability" SIG. Why not just have a
>> Public Cloud SIG in the first place?
>>
>>
>> +1, I interpreted originally that each use-case would be a SIG versus
>> the SIG being able to be segment oriented (in which multiple
>> use-cases could be pursued)
>>
>>
>>  > [...]
>> > Finally, how will this change impact the ATC/AUC status of the SIG
>> > members for voting rights in the TC/UC elections?
>>
>> There are various options. Currently you give UC WG leads the AUC
>> status. We could give any SIG lead both statuses. Or only give
>> the AUC
>> status to a subset of SIGs that the UC deems appropriate. It's
>> really an
>> implementation detail imho. (Also I would expect any SIG lead to
>> already
>> be both AUC and ATC somehow anyway, so that may be a non-issue).
>>
>>
>> We can discuss this later because it really is an implementation
>> detail. Thanks for the answers.
>>
>>
>> --
>> Thierry Carrez (ttx)
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>>
>>
>> -- 
>> Thanks,
>> Shamail Tahir
>> t: @ShamailXD
>> tz: Eastern Time
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> I think a key point you're going to want to convey and repeat ad
> nauseum with this SIG idea is that each SIG is focused on a specific
> use case and they can be spun up and spun down. Assuming that's what
> you want them to be.
>
> One problem I've seen with the various work groups is they overlap in
> a lot of ways but are probably driven as silos. For example, how many
> different work groups are there that care about scaling? So rather
> than have 5 work groups that all overlap on some level for a specific
> issue, create a SIG for that specific issue so the people involved can
> work on defining the specific problem and work to come up with a
> solution that can then be implemented by the upstream development
> teams, either within a single project or across projects depending on
> the issue. And once the specific issue is resolved, close down the SIG.
>
> Examples here would be things that fall under proposed community wide
> goals for a release, like running API services under wsgi, py3
> support, moving policy rules into code, hierarchical quotas, RBAC
> "admin of admins" policy 

Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Jeremy Stanley
On 2017-06-21 09:20:42 -0700 (-0700), Clark Boylan wrote:
[...]
> A few days ago I suggested a variant of Thierry's suggestion below. Get
> rid of the 'openstack' prefix entirely for hosting and use stackforge
> for everything. Then officially governed OpenStack projects are hosted
> just like any other project within infra under the stackforge (or Opium)
> name. The problem with the current "flat" namespace is that OpenStack
> means something specific and we have overloaded it for hosting. But we
> could flip that upside down and host OpenStack within a different flat
> namespace that represented "project hosting using OpenStack infra
> tooling".
> 
> The hosting location isn't meant to convey anything beyond the project
> is hosted on a Gerrit run by infra and tests are run by Zuul.
> stackforge/ is not an (anti)endorsement (and neither is openstack/).
[...]

For a thorough solution, we should probably switch all our
infrastructure to a different domain name too, preferably one which
doesn't mention "openstack" as a substring at all. This would
actually solve a number of other challenges as well, like our
current shared control of the openstack.org domain with the dev team
at the foundation (which presently limits what either the foundation
or the community can do with that domain because we have to take
each others' workflows into account).
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-21 Thread Fox, Kevin M
There already are user-side tools for deploying plumbing onto your own cloud, 
stuff like Tessmaster itself.

I think the win is being able to extend that k8s with the ability to 
declaratively request database clusters and manage them.

Its all about the commons.

If you build a Tessmaster clone just to do mariadb, then you share nothing with 
the other communities and have to reinvent the wheel, yet again. Operators' load 
increases because the tool doesn't function like other tools.

If you rely on a container orchestration engine that's already cross-cloud and 
can be easily deployed by a user or cloud operator, and fill in the gaps with 
what Trove wants to support (easy management of DBs), you get to reuse a lot of 
the commons. The user's slight increase in investment in dealing with the bit 
of extra plumbing allows other things to also be easily added to their 
cluster. It's very rare that a user would need to deploy/manage only a database. 
The net load on the operator decreases, not increases.

Look at helm apps for some examples. They do complex web applications that have 
web tiers, database tiers, etc. But they currently suffer from lack of good 
support for clustered databases. In the end, the majority of users care about 
"helm install my_scalable_app" kinds of things rather than installing all the 
things by hand. It's a pain.

OpenStack itself has this issue. It has lots of API tiers and DB tiers. If 
Trove were a k8s operator, OpenStack on k8s could use it to deploy the rest of 
OpenStack. Even more sharing.

Thanks,
Kevin

From: Thierry Carrez [thie...@openstack.org]
Sent: Wednesday, June 21, 2017 1:52 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Zane Bitter wrote:
> [...]
> Until then it seems to me that the tradeoff is between decoupling it
> from the particular cloud it's running on so that users can optionally
> deploy it standalone (essentially Vish's proposed solution for the *aaS
> services from many moons ago) vs. decoupling it from OpenStack in
> general so that the operator has more flexibility in how to deploy.
>
> I'd love to be able to cover both - from a user using it standalone to
> spin up and manage a DB in containers on a shared PaaS, through to a
> user accessing it as a service to provide a DB running on a dedicated VM
> or bare metal server, and everything in between. I don't know is such a
> thing is feasible. I suspect we're going to have to talk a lot about VMs
> and network plumbing and volume storage :)

As another data point, we are seeing this very same tradeoff with Magnum
vs. Tessmaster (with "I want to get a Kubernetes cluster" rather than "I
want to get a database").

Tessmaster is the user-side tool from EBay deploying Kubernetes on
different underlying cloud infrastructures: takes a bunch of cloud
credentials, then deploys, grows and shrinks Kubernetes cluster for you.

Magnum is the infrastructure-side tool from OpenStack giving you
COE-as-a-service, through a provisioning API.

Jay is advocating for Trove to be more like Tessmaster, and less like
Magnum. I think I agree with Zane that those are two different approaches:

From a public cloud provider perspective serving lots of small users, I
think a provisioning API makes sense. The user in that case is in a
"black box" approach, so I think the resulting resources should not
really be accessible as VMs by the tenant, even if they end up being
Nova VMs. The provisioning API could propose several options (K8s or
Mesos, MySQL or PostgreSQL).

From a private cloud / hybrid cloud / large cloud user perspective, the
user-side deployment tool, letting you deploy the software on various
types of infrastructure, probably makes more sense. It's probably more
work to run it, but you gain in flexibility. That user-side tool would
probably not support multiple options, but be application-specific.

So yes, ideally we would cover both. Because they target different
users, and both are right...

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Michał Jastrzębski
One of key components which, imho, made SIGs successful in k8s is
infrastructure behind it.

When someone proposes an issue, they can tag a SIG on it. Everyone in
this SIG will be notified that there is an issue they might be
interested in; they check it out and provide feedback. That also
creates additional familiarity with the dev toolset for non-dev SIG
members. I think what would be important for OpenStack SIGs to be
successful is connecting SIGs to both Launchpad and Gerrit.

For example:
New blueprint is introduced to Kolla-ansible that allows easy PCI
passthrough, we tag HPC and Scientific SIGs and everyone is notified
(via mail) that there is this thing in project Kolla they might want
to check out.
New change is proposed that addresses important issue - also tag SIGs
to encourage their reviews on actual implementation.

I think GitHub gives a good all-in-one toolset for SIG mgmt, issue mgmt,
code reviews and all. With our diverse tools this will be more
challenging, but important. And yes, we need SIG people to have
visibility into Gerrit. If you ask me what's the biggest problem in
OpenStack, I'd say that the operator community doesn't review implementation
details enough. Having notifications pushed to them would hopefully change
this a little bit.


On 21 June 2017 at 09:55, Matt Riedemann  wrote:
> On 6/21/2017 11:17 AM, Shamail Tahir wrote:
>>
>>
>>
>> On Wed, Jun 21, 2017 at 12:02 PM, Thierry Carrez > > wrote:
>>
>> Shamail Tahir wrote:
>> > In the past, governance has helped (on the UC WG side) to reduce
>> > overlaps/duplication in WGs chartered for similar objectives. I would
>> > like to understand how we will handle this (if at all) with the new SIG
>> > proposal?
>>
>> I tend to think that any overlap/duplication would get solved
>> naturally,
>> without having to force everyone through an application process that
>> may
>> discourage natural emergence of such groups. I feel like an
>> application
>> process would be premature optimization. We can always encourage
>> groups
>> to merge (or clean them up) after the fact. How much
>> overlaps/duplicative groups did you end up having ?
>>
>>
>> Fair point, it wasn't many. The reason I recalled this effort was because
>> we had to go through the exercise after the fact and that made the volume of
>> WGs to review much larger than had we asked the purpose whenever they were
>> created. As long as we check back periodically and not let the work for
>> validation/clean up pile up then this is probably a non-issue.
>>
>>
>> > Also, do we have to replace WGs as a concept or could SIGs
>> > augment them? One suggestion I have would be to keep projects on the TC
>> > side and WGs on the UC side and then allow for spin-up/spin-down of SIGs
>> > as needed for accomplishing specific goals/tasks (picture of a diagram
>> > I created at the Forum[1]).
>>
>> I feel like most groups should be inclusive of all community, so I'd
>> rather see the SIGs being the default, and ops-specific or
>> dev-specific
>> groups the exception. To come back to my Public Cloud WG example, you
>> need to have devs and ops in the same group in the first place before
>> you would spin-up a "address scalability" SIG. Why not just have a
>> Public Cloud SIG in the first place?
>>
>>
>> +1, I interpreted originally that each use-case would be a SIG versus the
>> SIG being able to be segment oriented (in which multiple use-cases could be
>> pursued)
>>
>>
>>  > [...]
>> > Finally, how will this change impact the ATC/AUC status of the SIG
>> > members for voting rights in the TC/UC elections?
>>
>> There are various options. Currently you give UC WG leads the AUC
>> status. We could give any SIG lead both statuses. Or only give the AUC
>> status to a subset of SIGs that the UC deems appropriate. It's really
>> an
>> implementation detail imho. (Also I would expect any SIG lead to
>> already
>> be both AUC and ATC somehow anyway, so that may be a non-issue).
>>
>>
>> We can discuss this later because it really is an implementation detail.
>> Thanks for the answers.
>>
>>
>> --
>> Thierry Carrez (ttx)
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>>
>>
>> --
>> Thanks,
>> Shamail Tahir
>> t: @ShamailXD
>> tz: Eastern Time
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> 

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-21 Thread Zane Bitter

On 21/06/17 01:49, Mark Kirkwood wrote:

On 21/06/17 02:08, Jay Pipes wrote:


On 06/20/2017 09:42 AM, Doug Hellmann wrote:

Does "service VM" need to be a first-class thing?  Akanda creates
them, using a service user. The VMs are tied to a "router" which
is the billable resource that the user understands and interacts with
through the API.


Frankly, I believe all of these types of services should be built as 
applications that run on OpenStack (or other) infrastructure. In other 
words, they should not be part of the infrastructure itself.


There's really no need for a user of a DBaaS to have access to the 
host or hosts the DB is running on. If the user really wanted that, 
they would just spin up a VM/baremetal server and install the thing 
themselves.




Yes, I think this area is where some hard thinking would be rewarded. I 
recall when I first met Trove, in my mind I expected to be 'carving off 
a piece of database'...and was a bit surprised to discover that it 
(essentially) leveraged Nova VM + OS + DB (no criticism intended - just 
saying I was surprised).


I think this is a common mistake (I know I've made it with respect to 
other services) when hearing about a new *aaS thing and making 
assumptions about the architecture. Here's a helpful way to think about it:


A cloud service has to have robust multitenancy. In the case of DBaaS, 
that gives you two options. You can start with a database that is 
already multitenant. If that works for your users, great. But many users 
just want somebody else to manage $MY_FAVOURITE_DATABASE that is not 
multitenant by design. Your only real option in that case is to give 
them their own copy and isolate it somehow from everyone else's. This is 
the use case that RDS and Trove are designed to solve.


It's important to note that this hasn't changed and isn't going to 
change in the foreseeable future. What *has* changed is that there are 
now more options for "isolate it somehow from everyone else's" - e.g. 
you can use a container instead of a VM.


Of course after delving into how it worked I 
realized that it did make sense to make use of the various Nova things 
(schedulers etc)


Fun fact: Trove started out as a *complete fork* of Nova(!).

*but* now we are thinking about re-architecting 
(plus more options exist now), it would make sense to revisit this area.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Matt Riedemann

On 6/21/2017 11:17 AM, Shamail Tahir wrote:



On Wed, Jun 21, 2017 at 12:02 PM, Thierry Carrez > wrote:


Shamail Tahir wrote:
> In the past, governance has helped (on the UC WG side) to reduce
> overlaps/duplication in WGs chartered for similar objectives. I would
> like to understand how we will handle this (if at all) with the new SIG
> proposal?

I tend to think that any overlap/duplication would get solved naturally,
without having to force everyone through an application process that may
discourage natural emergence of such groups. I feel like an application
process would be premature optimization. We can always encourage groups
to merge (or clean them up) after the fact. How much
overlaps/duplicative groups did you end up having ?


Fair point, it wasn't many. The reason I recalled this effort was 
because we had to go through the exercise after the fact and that made 
the volume of WGs to review much larger than had we asked the purpose 
whenever they were created. As long as we check back periodically and 
not let the work for validation/clean up pile up then this is probably a 
non-issue.



> Also, do we have to replace WGs as a concept or could SIGs
> augment them? One suggestion I have would be to keep projects on the TC
> side and WGs on the UC side and then allow for spin-up/spin-down of SIGs
> as needed for accomplishing specific goals/tasks (picture of a  diagram
> I created at the Forum[1]).

I feel like most groups should be inclusive of all community, so I'd
rather see the SIGs being the default, and ops-specific or dev-specific
groups the exception. To come back to my Public Cloud WG example, you
need to have devs and ops in the same group in the first place before
you would spin-up a "address scalability" SIG. Why not just have a
Public Cloud SIG in the first place?


+1, I interpreted originally that each use-case would be a SIG versus 
the SIG being able to be segment oriented (in which multiple use-cases 
could be pursued)



 > [...]
> Finally, how will this change impact the ATC/AUC status of the SIG
> members for voting rights in the TC/UC elections?

There are various options. Currently you give UC WG leads the AUC
status. We could give any SIG lead both statuses. Or only give the AUC
status to a subset of SIGs that the UC deems appropriate. It's really an
implementation detail imho. (Also I would expect any SIG lead to already
be both AUC and ATC somehow anyway, so that may be a non-issue).


We can discuss this later because it really is an implementation detail. 
Thanks for the answers.



--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I think a key point you're going to want to convey and repeat ad nauseum 
with this SIG idea is that each SIG is focused on a specific use case 
and they can be spun up and spun down. Assuming that's what you want 
them to be.


One problem I've seen with the various work groups is they overlap in a 
lot of ways but are probably driven as silos. For example, how many 
different work groups are there that care about scaling? So rather than 
have 5 work groups that all overlap on some level for a specific issue, 
create a SIG for that specific issue so the people involved can work on 
defining the specific problem and work to come up with a solution that 
can then be implemented by the upstream development teams, either within 
a single project or across projects depending on the issue. And once the 
specific issue is resolved, close down the SIG.


Examples here would be things that fall under proposed community wide 
goals for a release, like running API services under wsgi, py3 support, 
moving policy rules into code, hierarchical quotas, RBAC "admin of 
admins" policy changes, etc. Have a SIG that is comprised of people with 
different roles (project managers, product managers, operators, 
developers, docs, QA) that are focused on solving that one specific 
issue and drive it, and then close it down once some completion criteria 
is met.


That still doesn't mean you're going to get the attendance you need from 

Re: [openstack-dev] realtime kvm cpu affinities

2017-06-21 Thread Henning Schild
Am Wed, 21 Jun 2017 10:04:52 -0600
schrieb Chris Friesen :

> On 06/21/2017 09:45 AM, Chris Friesen wrote:
> > On 06/21/2017 02:42 AM, Henning Schild wrote:  
> >> Am Tue, 20 Jun 2017 10:41:44 -0600
> >> schrieb Chris Friesen :  
> >  
>  Our goal is to reach a high packing density of realtime VMs. Our
>  pragmatic first choice was to run all non-vcpu-threads on a
>  shared set of pcpus where we also run best-effort VMs and host
>  load. Now the OpenStack guys are not too happy with that because
>  that is load outside the assigned resources, which leads to
>  quota and accounting problems.  
> >>>
> >>> If you wanted to go this route, you could just edit the
> >>> "vcpu_pin_set" entry in nova.conf on the compute nodes so that
> >>> nova doesn't actually know about all of the host vCPUs.  Then you
> >>> could run host load and emulator threads on the pCPUs that nova
> >>> doesn't know about, and there will be no quota/accounting issues
> >>> in nova.  
> >>
> >> Exactly that is the idea but OpenStack currently does not allow
> >> that. No thread will ever end up on a core outside the
> >> vcpu_pin_set and emulator/io-threads are controlled by
> >> OpenStack/libvirt.  
> >
> > Ah, right.  This will isolate the host load from the guest load,
> > but it will leave the guest emulator work running on the same pCPUs
> > as one or more vCPU threads.
> >
> > Your emulator_pin_set idea is interesting...it might be worth
> > proposing in nova.  
> 
> Actually, based on [1] it appears they considered it and decided that
> it didn't provide enough isolation between realtime VMs.

Hey Chris,

i guess you are talking about that section from [1]:

>>> We could use a host level tunable to just reserve a set of host
>>> pCPUs for running emulator threads globally, instead of trying to
>>> account for it per instance. This would work in the simple case,
>>> but when NUMA is used, it is highly desirable to have more fine
>>> grained config to control emulator thread placement. When real-time
>>> or dedicated CPUs are used, it will be critical to separate
>>> emulator threads for different KVM instances.

I know it has been considered, but I would like to bring the topic up
again, because doing it that way allows for many more rt-VMs on a host
and I am not sure I fully understood why the idea was discarded in the
end.

I do not really see the influence of NUMA here. Say the
emulator_pin_set is used only for realtime VMs: we know that the
emulators and IOs can be "slow", so crossing NUMA nodes should not be an
issue. Or you could say the set needs to contain at least one core per
NUMA node and schedule emulators next to their vcpus.
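
To illustrate the idea as a nova.conf excerpt on a compute node
(vcpu_pin_set is an existing option; emulator_pin_set is only the proposal
discussed here and does not exist in nova today):

    [DEFAULT]
    # pCPUs nova may hand out to guest vCPU threads
    vcpu_pin_set = 4-31
    # proposed/hypothetical: shared pCPUs for the emulator and IO threads
    # of all realtime VMs, e.g. at least one core per NUMA node
    #emulator_pin_set = 2-3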

As we know from our setup, and as Luiz confirmed - it is _not_ "critical
to separate emulator threads for different KVM instances".
They have to be separated from the vcpu-cores but not from each other.
At least not on the "cpuset" basis, maybe "blkio" and cgroups like that.

Henning

> Chris
> 
> [1] 
> https://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/libvirt-emulator-threads-policy.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Chris Hoge

> On Jun 21, 2017, at 9:20 AM, Clark Boylan  wrote:
> 
> On Wed, Jun 21, 2017, at 08:48 AM, Dmitry Tantsur wrote:
>> On 06/19/2017 05:42 PM, Chris Hoge wrote:
>>> 
>>> 
 On Jun 15, 2017, at 5:57 AM, Thierry Carrez  wrote:
 
 Sean Dague wrote:
> [...]
> I think those are all fine. The other term that popped into my head was
> "Friends of OpenStack" as a way to describe the openstack-hosted efforts
> that aren't official projects. It may be too informal, but I do think
> the OpenStack-Hosted vs. OpenStack might still mix up in people's head.
 
 My original thinking was to call them "hosted projects" or "host
 projects", but then it felt a bit incomplete. I kinda like the "Friends
 of OpenStack" name, although it seems to imply some kind of vetting that
 we don't actually do.
>>> 
>>> Why not bring back the name Stackforge and apply that
>>> to unofficial projects? It’s short, descriptive, and unambiguous.
>> 
>> Just keep in mind that people always looked at stackforge projects as
>> "immature 
>> experimental projects". I remember getting questions "when is
>> ironic-inspector 
>> going to become a real project" because of our stackforge prefix back
>> then, even 
>> though it was already used in production.
> 
> A few days ago I suggested a variant of Thierry's suggestion below. Get
> rid of the 'openstack' prefix entirely for hosting and use stackforge
> for everything. Then officially governed OpenStack projects are hosted
> just like any other project within infra under the stackforge (or Opium)
> name. The problem with the current "flat" namespace is that OpenStack
> means something specific and we have overloaded it for hosting. But we
> could flip that upside down and host OpenStack within a different flat
> namespace that represented "project hosting using OpenStack infra
> tooling”.

I dunno. I understand that it’s extra work to have two namespaces,
but it sends a clear message. Approved TC, UC, and Board projects
remain under openstack, and unofficial move to a name that is not
openstack (i.e. stackforge/opium/etc).

As part of a branding exercise, it creates a clear, easy-to-understand,
easy-to-explain division.

For names like stackforge being considered a pejorative, we can
work as a community against that. I know that when I was helping run
the puppet modules under stackforge, I was proud of the work and
understood it to mean that it was a community supported, but not
official project. I was pretty sad when stackforge went away, precisely
because of the confusion we’re experiencing with ‘big tent’ today.


> The hosting location isn't meant to convey anything beyond the project
> is hosted on a Gerrit run by infra and tests are run by Zuul.
> stackforge/ is not an (anti)endorsement (and neither is openstack/).
> 
> Unfortunately, I expect that doing this will also result in a bunch of
> confusion around "why is OpenStack being renamed", "what is happening to
> OpenStack governance", etc.
> 
 An alternative would be to give "the OpenStack project infrastructure"
 some kind of a brand name (say, "Opium", for OpenStack project
 infrastructure ultimate madness) and then call the hosted projects
 "Opium projects". Rename the Infra team to Opium team, and voilà!
 -- 
 Thierry Carrez (ttx)
> 
> Clark
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Required Ceph rbd image features

2017-06-21 Thread Jon Bernard
* Chris MacNaughton  wrote:
> Hello,
> 
> I'm working on identifying the required RBD image features to be compatible
> with both the nova-kvm and nova-lxd drivers. The requirement derives from
> nova-lxd using the kernel driver for Ceph, while nova-kvm handles rbd
> through userspace.

I believe kernel rbd supports image format 2 since linux ~3.11.  There
was a time when the striping feature was not supported in kernel rbd; I'm
not sure if that's still the case today, but it should be easy to test for.
I suspect you'd want to enable layering at minimum.

-- 
Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Clark Boylan
On Wed, Jun 21, 2017, at 08:48 AM, Dmitry Tantsur wrote:
> On 06/19/2017 05:42 PM, Chris Hoge wrote:
> > 
> > 
> >> On Jun 15, 2017, at 5:57 AM, Thierry Carrez  wrote:
> >>
> >> Sean Dague wrote:
> >>> [...]
> >>> I think those are all fine. The other term that popped into my head was
> >>> "Friends of OpenStack" as a way to describe the openstack-hosted efforts
> >>> that aren't official projects. It may be too informal, but I do think
> >>> the OpenStack-Hosted vs. OpenStack might still mix up in people's head.
> >>
> >> My original thinking was to call them "hosted projects" or "host
> >> projects", but then it felt a bit incomplete. I kinda like the "Friends
> >> of OpenStack" name, although it seems to imply some kind of vetting that
> >> we don't actually do.
> > 
> > Why not bring back the name Stackforge and apply that
> > to unofficial projects? It’s short, descriptive, and unambiguous.
> 
> Just keep in mind that people always looked at stackforge projects as
> "immature 
> experimental projects". I remember getting questions "when is
> ironic-inspector 
> going to become a real project" because of our stackforge prefix back
> then, even 
> though it was already used in production.

A few days ago I suggested a variant of Thierry's suggestion below. Get
rid of the 'openstack' prefix entirely for hosting and use stackforge
for everything. Then officially governed OpenStack projects are hosted
just like any other project within infra under the stackforge (or Opium)
name. The problem with the current "flat" namespace is that OpenStack
means something specific and we have overloaded it for hosting. But we
could flip that upside down and host OpenStack within a different flat
namespace that represented "project hosting using OpenStack infra
tooling".

The hosting location isn't meant to convey anything beyond the project
is hosted on a Gerrit run by infra and tests are run by Zuul.
stackforge/ is not an (anti)endorsement (and neither is openstack/).

Unfortunately, I expect that doing this will also result in a bunch of
confusion around "why is OpenStack being renamed", "what is happening to
OpenStack governance", etc.

> >> An alternative would be to give "the OpenStack project infrastructure"
> >> some kind of a brand name (say, "Opium", for OpenStack project
> >> infrastructure ultimate madness) and then call the hosted projects
> >> "Opium projects". Rename the Infra team to Opium team, and voilà!
> >> -- 
> >> Thierry Carrez (ttx)

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Shamail Tahir
On Wed, Jun 21, 2017 at 12:02 PM, Thierry Carrez 
wrote:

> Shamail Tahir wrote:
> > In the past, governance has helped (on the UC WG side) to reduce
> > overlaps/duplication in WGs chartered for similar objectives. I would
> > like to understand how we will handle this (if at all) with the new SIG
> > proposal?
>
> I tend to think that any overlap/duplication would get solved naturally,
> without having to force everyone through an application process that may
> discourage natural emergence of such groups. I feel like an application
> process would be premature optimization. We can always encourage groups
> to merge (or clean them up) after the fact. How much
> overlaps/duplicative groups did you end up having ?
>

Fair point, it wasn't many. The reason I recalled this effort was because
we had to go through the exercise after the fact and that made the volume
of WGs to review much larger than had we asked the purpose whenever they
were created. As long as we check back periodically and not let the work
for validation/clean up pile up then this is probably a non-issue.

>
> > Also, do we have to replace WGs as a concept or could SIGs
> > augment them? One suggestion I have would be to keep projects on the TC
> > side and WGs on the UC side and then allow for spin-up/spin-down of SIGs
> > as needed for accomplishing specific goals/tasks (picture of a  diagram
> > I created at the Forum[1]).
>
> I feel like most groups should be inclusive of all community, so I'd
> rather see the SIGs being the default, and ops-specific or dev-specific
> groups the exception. To come back to my Public Cloud WG example, you
> need to have devs and ops in the same group in the first place before
> you would spin-up a "address scalability" SIG. Why not just have a
> Public Cloud SIG in the first place?
>

+1, I interpreted originally that each use-case would be a SIG versus the
SIG being able to be segment oriented (in which multiple use-cases could be
pursued)

>
> > [...]
> > Finally, how will this change impact the ATC/AUC status of the SIG
> > members for voting rights in the TC/UC elections?
>
> There are various options. Currently you give UC WG leads the AUC
> status. We could give any SIG lead both statuses. Or only give the AUC
> status to a subset of SIGs that the UC deems appropriate. It's really an
> implementation detail imho. (Also I would expect any SIG lead to already
> be both AUC and ATC somehow anyway, so that may be a non-issue).
>

We can discuss this later because it really is an implementation detail.
Thanks for the answers.

>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Thierry Carrez
Shamail Tahir wrote:
> In the past, governance has helped (on the UC WG side) to reduce
> overlaps/duplication in WGs chartered for similar objectives. I would
> like to understand how we will handle this (if at all) with the new SIG
> proposal?

I tend to think that any overlap/duplication would get solved naturally,
without having to force everyone through an application process that may
discourage natural emergence of such groups. I feel like an application
process would be premature optimization. We can always encourage groups
to merge (or clean them up) after the fact. How many
overlapping/duplicative groups did you end up having?

> Also, do we have to replace WGs as a concept or could SIGs
> augment them? One suggestion I have would be to keep projects on the TC
> side and WGs on the UC side and then allow for spin-up/spin-down of SIGs
> as needed for accomplishing specific goals/tasks (picture of a  diagram
> I created at the Forum[1]).

I feel like most groups should be inclusive of all community, so I'd
rather see the SIGs being the default, and ops-specific or dev-specific
groups the exception. To come back to my Public Cloud WG example, you
need to have devs and ops in the same group in the first place before
you would spin-up a "address scalability" SIG. Why not just have a
Public Cloud SIG in the first place?

> [...]
> Finally, how will this change impact the ATC/AUC status of the SIG
> members for voting rights in the TC/UC elections?

There are various options. Currently you give UC WG leads the AUC
status. We could give any SIG lead both statuses. Or only give the AUC
status to a subset of SIGs that the UC deems appropriate. It's really an
implementation detail imho. (Also I would expect any SIG lead to already
be both AUC and ATC somehow anyway, so that may be a non-issue).

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Dmitry Tantsur

On 06/19/2017 05:42 PM, Chris Hoge wrote:




On Jun 15, 2017, at 5:57 AM, Thierry Carrez  wrote:

Sean Dague wrote:

[...]
I think those are all fine. The other term that popped into my head was
"Friends of OpenStack" as a way to describe the openstack-hosted efforts
that aren't official projects. It may be too informal, but I do think
the OpenStack-Hosted vs. OpenStack might still mix up in people's head.


My original thinking was to call them "hosted projects" or "host
projects", but then it felt a bit incomplete. I kinda like the "Friends
of OpenStack" name, although it seems to imply some kind of vetting that
we don't actually do.


Why not bring back the name Stackforge and apply that
to unofficial projects? It’s short, descriptive, and unambiguous.


Just keep in mind that people always looked at stackforge projects as "immature 
experimental projects". I remember getting questions "when is ironic-inspector 
going to become a real project" because of our stackforge prefix back then, even 
though it was already used in production.




-Chris


An alternative would be to give "the OpenStack project infrastructure"
some kind of a brand name (say, "Opium", for OpenStack project
infrastructure ultimate madness) and then call the hosted projects
"Opium projects". Rename the Infra team to Opium team, and voilà!
--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-21 Thread Chris Friesen

On 06/21/2017 02:42 AM, Henning Schild wrote:

Am Tue, 20 Jun 2017 10:41:44 -0600
schrieb Chris Friesen :



Our goal is to reach a high packing density of realtime VMs. Our
pragmatic first choice was to run all non-vcpu-threads on a shared
set of pcpus where we also run best-effort VMs and host load.
Now the OpenStack guys are not too happy with that because that is
load outside the assigned resources, which leads to quota and
accounting problems.


If you wanted to go this route, you could just edit the
"vcpu_pin_set" entry in nova.conf on the compute nodes so that nova
doesn't actually know about all of the host vCPUs.  Then you could
run host load and emulator threads on the pCPUs that nova doesn't
know about, and there will be no quota/accounting issues in nova.


Exactly that is the idea but OpenStack currently does not allow that.
No thread will ever end up on a core outside the vcpu_pin_set and
emulator/io-threads are controlled by OpenStack/libvirt.


Ah, right.  This will isolate the host load from the guest load, but it will 
leave the guest emulator work running on the same pCPUs as one or more vCPU threads.


Your emulator_pin_set idea is interesting...it might be worth proposing in nova.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Shamail Tahir
Hi,

In the past, governance has helped (on the UC WG side) to reduce
overlaps/duplication in WGs chartered for similar objectives. I would like
to understand how we will handle this (if at all) with the new SIG proposal?
Also, do we have to replace WGs as a concept or could SIGs augment them? One
suggestion I have would be to keep projects on the TC side and WGs on the
UC side and then allow for spin-up/spin-down of SIGs as needed for
accomplishing specific goals/tasks (picture of a  diagram I created at the
Forum[1]).

The WGs could focus on defining key objectives for users of a shared group
(market vertical like Enterprise or Scientific WG, horizontal function like
PWG) and then SIGs could be created based on this list to accomplish the
objective and spin-down. Similarly a project team could determine a need to
gather additional data/requirements or need help with a certain task could
also spin-up a SIG to accomplish it (e.g. updating an outdated docs set,
discussion on a specific spec that needs to be more thoroughly crafted,
etc.)

Finally, how will this change impact the ATC/AUC status of the SIG members
for voting rights in the TC/UC elections?

[1] https://drive.google.com/file/d/0B_yCSDGnhIbzS3V1b1lpZGpIaHBmc29SaUdiYzJtX21BWkl3/

Thanks,
Shamail


On Wed, Jun 21, 2017 at 11:26 AM, Thierry Carrez 
wrote:

> Matt Riedemann wrote:
> > How does the re-branding or re-categorization of these groups solve the
> > actual feedback problem? If the problem is getting different people from
> > different groups together, how does this solve that? For example, how do
> > we get upstream developers aware of operator issues or product managers
> > communicating their needs and feature priorities to the upstream
> > developers?
>
> My hope is that specific developers interested in a given use case or a
> given problem space would join the corresponding SIG and discuss with
> operators in the same SIG. As an example, imagine an upstream developer
> from CERN, able to join the Scientific SIG to discuss with operators and
> users with Scientific/Academic needs of the feature gap, and group with
> other like-minded developers to get that feature gap collectively
> addressed.
>
> > No one can join all work groups or SIGs and be aware of all
> > things at the same time, and actually have time to do anything else.
> > Is the number of various work groups/SIGs a problem?
>
> I would not expect everyone to join every SIG. I would actually expect
> most people to join 0 or 1 SIG.
>
> > Maybe what I'd need is an example of an existing problem case and how
> > the new SIG model would fix that - concrete examples would be really
> > appreciated when communicating suggested governance changes.
> >
> > For example, is there some feature/requirement/issue that one group has
> > wanted implemented/fixed for a long time but another group isn't aware
> > of it? How would SIGs fix that in a way that work groups haven't?
>
> Two examples:
>
> - the "API WG" was started by people on the UC side, listed as a UC
> workgroup, and wasn't making much progress as it was missing devs. Now
> it's been reborn as a TC workgroup, led by a couple of devs, and is
> lacking app user input. Artificial barriers discourage people from joining.
> Let's just call all of them SIGs.
>
> - the "Public Cloud WG" tries to cover an extremely important use case
> for all of OpenStack (we all need successful OpenStack public clouds).
> However, so far I've hardly seen a developer joining, because it's seen
> as an Ops group just trying to make requirements emerge. I want the few
> developers that OVH or CityCloud or other public clouds are ready to
> throw upstream to use the rebranded "Public Cloud SIG" as a rally point,
> to coordinate their actions. Because if they try to affect upstream
> separately, they won't go far, and we badly need them involved.
>
> Yes, it's mostly a rebranding exercise, but perception matters.
> Hope this clarifies,
>
> --
> Thierry Carrez (ttx)
>



-- 
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time


Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-21 Thread Édouard Thuleau
Hi,

@Chaoyi,
I don't want to change the core plugin interface. But I'm not sure we
are talking about the same interface. I had a very quick look into the
tricycle code and I think it uses the NeutronDbPluginV2 interface [1]
which implements the Neutron DB model. Our Contrail Neutron plugin
implements the NeutronPluginBaseV2 interface [2]. Anyway,
NeutronDbPluginV2 is inheriting from NeutronPluginBaseV2 [3].
Thanks for the pointer to the stadium paragraph.

@Kevin,
Service plugins loaded by default are defined in a constant list [4]
and I don't see how I can prevent a default service plugin from being
loaded [5].

[1] 
https://github.com/openstack/tricircle/blob/master/tricircle/network/central_plugin.py#L128
[2] 
https://github.com/Juniper/contrail-neutron-plugin/blob/master/neutron_plugin_contrail/plugins/opencontrail/contrail_plugin_base.py#L113
[3] 
https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L125
[4] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
[5] https://github.com/openstack/neutron/blob/master/neutron/manager.py#L190

Édouard.
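(For reference, Kevin's suggestion quoted below amounts to something like
the following untested sketch, run from an out-of-tree core plugin module
before the service plugins are loaded; the plugin names here are assumed
from the constants module in [4] and are illustrative only:)

  from neutron.plugins.common import constants

  # drop default service plugins this core plugin cannot back with the
  # Neutron DB (key names assumed from the linked constants module)
  for name in ('tag', 'timestamp', 'revisions'):
      constants.DEFAULT_SERVICE_PLUGINS.pop(name, None)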

On Wed, Jun 21, 2017 at 11:22 AM, Kevin Benton  wrote:
> Why not just delete the service plugins you don't support from the default
> plugins dict?
>
> On Wed, Jun 21, 2017 at 1:45 AM, Édouard Thuleau 
> wrote:
>>
>> Ok, we would like to help on that. How can we start?
>>
>> I think the issue I raise in that thread must be the first point to
>> address and my second proposition seems to be the correct one. What do
>> you think?
>> But it will need some time and I'm not sure we'll be able to fix all
>> service plugins loaded by default before the next Pike release.
>>
>> I'd like to propose a workaround until all default service plugins are
>> compatible with non-DB core plugins. We can continue to load that
>> default service plugins list but authorize a core plugin to disable
>> it completely with a private attribute on the core plugin class, like
>> it's done for bulk/pagination/sorting operations.
>>
>> Of course, we need to add the ability to report any regression on
>> that. I think unit tests will help and we can also work on a
>> functional test based on a fake non-DB core plugin.
>>
>> Regards,
>> Édouard.
>>
>> On Tue, Jun 20, 2017 at 12:09 AM, Kevin Benton  wrote:
>> > The issue is mainly developer resources. Everyone currently working
>> > upstream
>> > doesn't have the bandwidth to keep adding/reviewing the layers of
>> > interfaces
>> > to make the DB optional that go untested. (None of the projects that
>> > would
>> > use them run a CI system that reports results on Neutron patches.)
>> >
>> > I think we can certainly accept patches to do the things you are
>> > proposing,
>> > but there is no guarantee that it won't regress to being DB-dependent
>> > until
>> > there is something reporting results back telling us when it breaks.
>> >
>> > So it's not that the community is against non-DB core plugins, it's just
>> > that the people developing those plugins don't participate in the
>> > community
>> > to ensure they work.
>> >
>> > Cheers
>> >
>> >
>> > On Mon, Jun 19, 2017 at 2:15 AM, Édouard Thuleau
>> > 
>> > wrote:
>> >>
>> >> Oops, sent too fast, sorry. I try again.
>> >>
>> >> Hi,
>> >>
>> >> Since the Mitaka release, a default service plugins list is loaded when
>> >> Neutron server starts [1]. That list is not editable and was extended
>> >> with a few services
>> >> [2]. But all of them rely on the Neutron DB model.
>> >>
>> >> If a core driver is not based on the ML2 core plugin framework or not
>> >> based on
>> >> the 'neutron.db.models_v2' class, all that service plugins will not
>> >> work.
>> >>
>> >> So my first question is: does Neutron still support core plugins not
>> >> based on ML2 or the 'neutron.db.models_v2' class?
>> >>
>> >> If yes, I would like to propose two solutions:
>> >> - permit the core plugin to overload the service plugin class with its
>> >> own implementation, continuing to use the actual Neutron DB based
>> >> services as default.
>> >> - modify all default service plugins to use the service plugin driver
>> >> framework [3], and set the actual Neutron DB based implementation as the
>> >> default driver for services. That permits core drivers not based on the
>> >> Neutron DB to specify a driver. We can see that solution was adopted in
>> >> the networking-bgpvpn project, where we can find two abstract driver
>> >> classes: one for core drivers based on the Neutron DB model [4] and one
>> >> used by core drivers not based on the DB [5], such as the Contrail
>> >> driver [6].
>> >>
>> >> [1]
>> >>
>> >> https://github.com/openstack/neutron/commit/aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
>> >> [2]
>> >>
>> >> https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43

Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-06-21 Thread Jay Pipes

On 06/21/2017 11:31 AM, Ken Giusti wrote:
On Wed, Jun 21, 2017 at 11:24 AM, Jay Pipes wrote:


On 06/21/2017 09:23 AM, Ken Giusti wrote:

Andy and I have taken a stab at defining some test scenarios for
anal the different message bus...


That was a particularly unfortunate choice of words.

Ugh. Sorry - most unfortunate fat-finger...or Freudian slip...


LOL, no worries. It made my morning so far.

Best,
-jay



Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-06-21 Thread Ken Giusti
On Wed, Jun 21, 2017 at 11:24 AM, Jay Pipes  wrote:

> On 06/21/2017 09:23 AM, Ken Giusti wrote:
>
>> Andy and I have taken a stab at defining some test scenarios for anal the
>> different message bus...
>>
>
> That was a particularly unfortunate choice of words.
>
>
Ugh. Sorry - most unfortunate fat-finger...or Freudian slip...


>
> Best,
> -jay
>



-- 
Ken Giusti  (kgiu...@gmail.com)


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Thierry Carrez
Matt Riedemann wrote:
> How does the re-branding or re-categorization of these groups solve the
> actual feedback problem? If the problem is getting different people from
> different groups together, how does this solve that? For example, how do
> we get upstream developers aware of operator issues or product managers
> communicating their needs and feature priorities to the upstream
> developers?

My hope is that specific developers interested in a given use case or a
given problem space would join the corresponding SIG and discuss with
operators in the same SIG. As an example, imagine an upstream developer
from CERN, able to join the Scientific SIG to discuss with operators and
users with Scientific/Academic needs of the feature gap, and group with
other like-minded developers to get that feature gap collectively addressed.

> No one can join all work groups or SIGs and be aware of all
> things at the same time, and actually have time to do anything else.
> Is the number of various work groups/SIGs a problem?

I would not expect everyone to join every SIG. I would actually expect
most people to join 0 or 1 SIG.

> Maybe what I'd need is an example of an existing problem case and how
> the new SIG model would fix that - concrete examples would be really
> appreciated when communicating suggested governance changes.
> 
> For example, is there some feature/requirement/issue that one group has
> wanted implemented/fixed for a long time but another group isn't aware
> of it? How would SIGs fix that in a way that work groups haven't?

Two examples:

- the "API WG" was started by people on the UC side, listed as a UC
workgroup, and wasn't making much progress as it was missing devs. Now
it's been reborn as a TC workgroup, led by a couple of devs, and is
lacking app user input. Artificial barriers discourage people from joining.
Let's just call all of them SIGs.

- the "Public Cloud WG" tries to cover an extremely important use case
for all of OpenStack (we all need successful OpenStack public clouds).
However, so far I've hardly seen a developer joining, because it's seen
as an Ops group just trying to make requirements emerge. I want the few
developers that OVH or CityCloud or other public clouds are ready to
throw upstream to use the rebranded "Public Cloud SIG" as a rally point,
to coordinate their actions. Because if they try to affect upstream
separately, they won't go far, and we badly need them involved.

Yes, it's mostly a rebranding exercise, but perception matters.
Hope this clarifies,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-06-21 Thread Jay Pipes

On 06/21/2017 09:23 AM, Ken Giusti wrote:
Andy and I have taken a stab at defining some test scenarios for anal 
the different message bus...


That was a particularly unfortunate choice of words.

Best,
-jay




Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Matt Riedemann

On 6/21/2017 9:59 AM, Thierry Carrez wrote:

Hi everyone,

One of the areas identified as a priority by the Board + TC + UC
workshop in March was the need to better close the feedback loop and
make unanswered requirements emerge. Part of the solution is to ensure
that groups that look at specific use cases, or specific problem spaces
within OpenStack get participation from a wide spectrum of roles, from
pure operators of OpenStack clouds, to upstream developers, product
managers, researchers, and every combination thereof. In the past year
we reorganized the Design Summit event, so that the design / planning /
feedback gathering part of it would be less dev- or ops-branded, to
encourage participation of everyone in a neutral ground, based on the
topic being discussed. That was just a first step.

In OpenStack we have a number of "working groups", groups of people
interested in discussing a given use case, or addressing a given problem
space across all of OpenStack. Examples include the API working group,
the Deployment working group, the Public clouds working group, the
Telco/NFV working group, or the Scientific working group. However, for
governance reasons, those are currently set up either as a User
Committee working group[1], or a working group depending on the
Technical Committee[2]. This branding of working groups artificially
discourages participation from one side in the other side's groups, for no
specific reason. This needs to be fixed.

We propose to take a page out of Kubernetes playbook and set up "SIGs"
(special interest groups), that would be primarily defined by their
mission (i.e. the use case / problem space the group wants to
collectively address). Those SIGs would not be Ops SIGs or Dev SIGs,
they would just be OpenStack SIGs. While it's possible some groups will lean
more towards an operator or dev focus (based on their mission), it is
important to encourage everyone to join in early and often. SIGs could
be very easily set up, just by adding your group to a wiki page,
defining the mission of the group, a contact point and details on
meetings (if the group has any). No need for prior vetting by any
governance body. The TC and UC would likely still clean up dead SIGs
from the list, to keep it relevant and tidy. Since they are neither dev
or ops, SIGs would not use the -dev or the -operators lists: they would
use a specific ML (openstack-sigs ?) to hold their discussions without
cross-posting, with appropriate subject tagging.

Not everything would become a SIG. Upstream project teams would remain
the same (although some of them, like Security, might turn into a SIG).
Teams under the UC that are purely operator-facing (like the Ops Tags
Team or the AUC recognition team) would likewise stay as UC subteams.

Comments, thoughts ?

[1]
https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Working_Groups_and_Teams
[2] https://wiki.openstack.org/wiki/Upstream_Working_Groups



How does the re-branding or re-categorization of these groups solve the 
actual feedback problem? If the problem is getting different people from 
different groups together, how does this solve that? For example, how do 
we get upstream developers aware of operator issues or product managers 
communicating their needs and feature priorities to the upstream 
developers? No one can join all work groups or SIGs and be aware of all 
things at the same time, and actually have time to do anything else.


Is the number of various work groups/SIGs a problem?

Maybe what I'd need is an example of an existing problem case and how 
the new SIG model would fix that - concrete examples would be really 
appreciated when communicating suggested governance changes.


For example, is there some feature/requirement/issue that one group has 
wanted implemented/fixed for a long time but another group isn't aware 
of it? How would SIGs fix that in a way that work groups haven't?


--

Thanks,

Matt



Re: [openstack-dev] realtime kvm cpu affinities

2017-06-21 Thread Henning Schild
On Wed, 21 Jun 2017 09:32:42 -0400,
Luiz Capitulino wrote:

> On Wed, 21 Jun 2017 12:47:27 +0200
> Henning Schild  wrote:
> 
> > > What is your solution?
> > 
> > We have a kilo-based prototype that introduced emulator_pin_set in
> > nova.conf. All vcpu threads will be scheduled on vcpu_pin_set and
> > emulators and IO of all VMs will share emulator_pin_set.
> > vcpu_pin_set contains isolcpus from the host and emulator_pin_set
> > contains best-effort cores from the host.  
> 
> You lost me here a bit as I'm not familiar with OpenStack
> configuration.

Doesn't matter, I guess you got the point, and some other people might
find that useful.

> > That basically means you put all emulators and io of all VMs onto a
> > set of cores that the host potentially also uses for other stuff.
> > Sticking with the made up numbers from above, all the 0.05s can
> > share pcpus.  
> 
> So, this seems to be way we use KVM-RT without OpenStack: emulator
> threads and io threads run on the host housekeeping cores, where all
> other host processes will run. IOW, you only reserve pcpus for vcpus
> threads.

Thanks for the input. I think you confirmed that the current
implementation in OpenStack cannot work and that the new proposal and
our approach should work.
Now we will have to see how to proceed with that information in the
OpenStack community.

> I can't comment on OpenStack accounting trade-off/implications of
> doing this, but from KVM-RT perspective this is probably the best
> solution. I say "probably" because so far we have only tested with
> cyclictest and simple applications. I don't know if more complex
> applications would have different needs wrt I/O threads for example.

We have a networking ping/pong cyclictest kind of thing and much more
complex setups. Emulators and IO are not on the critical path in our
examples.

> PS: OpenStack devel list refuses emails from non-subscribers. I won't
> subscribe for a one-time discussion, so my emails are not
> reaching the list...

Yeah, I had the same problem, also with their Gerrit. Let's just call it
Stack ... I kept all your text in my replies, and they end up on the
list.

Henning



[openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Thierry Carrez
Hi everyone,

One of the areas identified as a priority by the Board + TC + UC
workshop in March was the need to better close the feedback loop and
make unanswered requirements emerge. Part of the solution is to ensure
that groups that look at specific use cases, or specific problem spaces
within OpenStack get participation from a wide spectrum of roles, from
pure operators of OpenStack clouds, to upstream developers, product
managers, researchers, and every combination thereof. In the past year
we reorganized the Design Summit event, so that the design / planning /
feedback gathering part of it would be less dev- or ops-branded, to
encourage participation of everyone in a neutral ground, based on the
topic being discussed. That was just a first step.

In OpenStack we have a number of "working groups", groups of people
interested in discussing a given use case, or addressing a given problem
space across all of OpenStack. Examples include the API working group,
the Deployment working group, the Public clouds working group, the
Telco/NFV working group, or the Scientific working group. However, for
governance reasons, those are currently set up either as a User
Committee working group[1], or a working group depending on the
Technical Committee[2]. This branding of working groups artificially
discourages participation from one side in the other side's groups, for no
specific reason. This needs to be fixed.

We propose to take a page out of Kubernetes playbook and set up "SIGs"
(special interest groups), that would be primarily defined by their
mission (i.e. the use case / problem space the group wants to
collectively address). Those SIGs would not be Ops SIGs or Dev SIGs,
they would just be OpenStack SIGs. While it's possible some groups will lean
more towards an operator or dev focus (based on their mission), it is
important to encourage everyone to join in early and often. SIGs could
be very easily set up, just by adding your group to a wiki page,
defining the mission of the group, a contact point and details on
meetings (if the group has any). No need for prior vetting by any
governance body. The TC and UC would likely still clean up dead SIGs
from the list, to keep it relevant and tidy. Since they are neither dev
or ops, SIGs would not use the -dev or the -operators lists: they would
use a specific ML (openstack-sigs ?) to hold their discussions without
cross-posting, with appropriate subject tagging.

Not everything would become a SIG. Upstream project teams would remain
the same (although some of them, like Security, might turn into a SIG).
Teams under the UC that are purely operator-facing (like the Ops Tags
Team or the AUC recognition team) would likewise stay as UC subteams.

Comments, thoughts ?

[1]
https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Working_Groups_and_Teams
[2] https://wiki.openstack.org/wiki/Upstream_Working_Groups

-- 
Melvin Hillsman & Thierry Carrez



Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-21 Thread Matt Riedemann

On 6/21/2017 7:04 AM, Shewale, Bhagyashri wrote:
I would like to write functional tests to check the exact req/resp for
each placement API for all supported versions, similar
to what is already done for other APIs under
nova/tests/functional/api_sample_tests/api_samples/*.
These request/response json samples can be used by api.openstack.org
and in the manuals.
There are already functional tests written for placement APIs under
nova/tests/functional/api/openstack/placement,
but these tests don't check the entire HTTP response for each API for
all supported versions.
I think adding such functional tests for checking the response for each
placement API would be beneficial to the project.
If there is interest in creating such functional tests, I can file a
new blueprint for this activity.




This has come up before and we don't want to use the same functional API 
samples infrastructure for generating API samples for the placement API. 
The functional API samples tests are confusing and have a steep learning 
curve for new contributors (and even long-time contributors 
still get confused by them).


Talk with Chris Dent about ideas here for API samples with placement. 
He's talked about building something into the gabbi library for this, 
but I don't know if that's being worked on or not.
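(For context, gabbi suites are declarative YAML, so a placement "sample"
in that style might look roughly like the sketch below - an illustration
only, not something the gabbi tooling generates today:)

  tests:
    - name: list resource providers
      GET: /resource_providers
      status: 200
      response_json_paths:
        $.resource_providers: []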


Chris is also on vacation for a couple of weeks, just FYI.

--

Thanks,

Matt



[openstack-dev] Required Ceph rbd image features

2017-06-21 Thread Chris MacNaughton
Hello,

I'm working on identifying the required RBD image features to be compatible
with both the nova-kvm and nova-lxd drivers. The requirement derives from
nova-lxd using the kernel driver for Ceph, while nova-kvm handles rbd
through userspace.

Thank you,
Chris MacNaughton


[openstack-dev] [Acceleration]Cyborg Weekly Meeting 2017.06.21

2017-06-21 Thread Zhipeng Huang
Hi team,

A kind reminder for today's meeting about 20mins later on
#openstack-cyborg, agenda at
https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting#Agenda_for_next_meeting


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-06-21 Thread Ilya Shakhat
Hi Ken,

Please check the scenarios and reports that exist in the Performance Docs. In
particular, you may be interested in:
 * O.M.Simulator -
https://github.com/openstack/oslo.messaging/blob/master/tools/simulator.py
 * MQ  performance scenario -
https://docs.openstack.org/developer/performance-docs/test_plans/mq/plan.html#message-queue-performance
 * One of RabbitMQ reports -
https://docs.openstack.org/developer/performance-docs/test_results/mq/rabbitmq/cmsm/index.html
 * MQ HA scenario -
https://docs.openstack.org/developer/performance-docs/test_plans/mq_ha/plan.html
 * One of RabbitMQ HA reports -
https://docs.openstack.org/developer/performance-docs/test_results/mq_ha/rabbitmq-ha-queues/cs1ss2-ks2-ha/omsimulator-ha-call-cs1ss2-ks2-ha/index.html


Thanks,
Ilya
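For a flavor of the simulator mentioned above, a minimal run looks roughly
like this (flags assumed from the tool's help output; check
tools/simulator.py --help for the exact options in your release):

  # terminal 1: start an RPC server
  python simulator.py --url rabbit://guest:guest@localhost:5672/ rpc-server
  # terminal 2: drive it with an RPC client
  python simulator.py --url rabbit://guest:guest@localhost:5672/ rpc-client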

2017-06-21 15:23 GMT+02:00 Ken Giusti :

> Hi All,
>
> Andy and I have taken a stab at defining some test scenarios for anal the
> different message bus technologies:
>
> https://etherpad.openstack.org/p/1BGhFHDIoi
>
> We've started with tests for just the oslo.messaging layer to analyze
> throughput and latency as the number of message bus clients - and the bus
> itself - scale out.
>
> The next step will be to define messaging oriented test scenarios for an
> openstack deployment.  We've started by enumerating a few of the tools,
> topologies, and fault conditions that need to be covered.
>
> Let's use this epad as a starting point for analyzing messaging - please
> feel free to contribute, question, and criticize :)
>
> thanks,
>
>
>
> --
> Ken Giusti  (kgiu...@gmail.com)
>


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-21 Thread Davanum Srinivas
On Wed, Jun 21, 2017 at 1:52 AM, Thierry Carrez  wrote:
> Zane Bitter wrote:
>> [...]
>> Until then it seems to me that the tradeoff is between decoupling it
>> from the particular cloud it's running on so that users can optionally
>> deploy it standalone (essentially Vish's proposed solution for the *aaS
>> services from many moons ago) vs. decoupling it from OpenStack in
>> general so that the operator has more flexibility in how to deploy.
>>
>> I'd love to be able to cover both - from a user using it standalone to
>> spin up and manage a DB in containers on a shared PaaS, through to a
>> user accessing it as a service to provide a DB running on a dedicated VM
>> or bare metal server, and everything in between. I don't know if such a
>> thing is feasible. I suspect we're going to have to talk a lot about VMs
>> and network plumbing and volume storage :)
>
> As another data point, we are seeing this very same tradeoff with Magnum
> vs. Tessmaster (with "I want to get a Kubernetes cluster" rather than "I
> want to get a database").
>
> Tessmaster is the user-side tool from EBay deploying Kubernetes on
> different underlying cloud infrastructures: takes a bunch of cloud
> credentials, then deploys, grows and shrinks Kubernetes cluster for you.
>
> Magnum is the infrastructure-side tool from OpenStack giving you
> COE-as-a-service, through a provisioning API.
>
> Jay is advocating for Trove to be more like Tessmaster, and less like
> Magnum. I think I agree with Zane that those are two different approaches:
>
> From a public cloud provider perspective serving lots of small users, I
> think a provisioning API makes sense. The user in that case is in a
> "black box" approach, so I think the resulting resources should not
> really be accessible as VMs by the tenant, even if they end up being
> Nova VMs. The provisioning API could propose several options (K8s or
> Mesos, MySQL or PostgreSQL).

I like this! ^^ If we can pull off "different underlying cloud
infrastructures" like TessMaster, that would be of more value to folks
who may not be using OpenStack (or VMs!)


>
> From a private cloud / hybrid cloud / large cloud user perspective, the
> user-side deployment tool, letting you deploy the software on various
> types of infrastructure, probably makes more sense. It's probably more
> work to run it, but you gain in flexibility. That user-side tool would
> probably not support multiple options, but be application-specific.
>
> So yes, ideally we would cover both. Because they target different
> users, and both are right...
>
> --
> Thierry Carrez (ttx)
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [tripleo][ci] where to find the CI backlog and issues we're tracking

2017-06-21 Thread Wesley Hayutin
On Tue, Jun 20, 2017 at 2:51 PM, Emilien Macchi  wrote:

> On Tue, Jun 20, 2017 at 12:49 PM, Wesley Hayutin 
> wrote:
> > Greetings,
> >
> > It's become apparent that not everyone in the TripleO community may be
> > aware
> > of where CI-specific work is tracked.
> >
> > To find out which CI related features or bug fixes are in progress or to
> see
> > the backlog please consult [1].
> >
> > To find out what issues have been found in OpenStack via CI please
> consult
> > [2].
> >
> > Thanks!
>
> Thanks Wes for this information. I was about to start adding more
> links and information when I realized monitoring TripleO CI might
> deserve a little bit of training and documentation.
> I'll take some time this week to create a new section in the TripleO docs
> with useful information that we can easily share with our community
> so everyone can learn how to stay aware of CI status.
>
>
Emilien,
That's a really good point, we should have this information in the docs.
You are a busy guy, we'll take care of that.

Thanks for the input!


>
> >
> > [1] https://trello.com/b/U1ITy0cu/tripleo-ci-squad
> > [2] https://trello.com/b/WXJTwsuU/tripleo-and-rdo-ci-status
> >
> > 
>
>
>
> --
> Emilien Macchi
>


[openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-06-21 Thread Ken Giusti
Hi All,

Andy and I have taken a stab at defining some test scenarios for anal the
different message bus technologies:

https://etherpad.openstack.org/p/1BGhFHDIoi

We've started with tests for just the oslo.messaging layer to analyze
throughput and latency as the number of message bus clients - and the bus
itself - scale out.

The next step will be to define messaging oriented test scenarios for an
openstack deployment.  We've started by enumerating a few of the tools,
topologies, and fault conditions that need to be covered.

Let's use this epad as a starting point for analyzing messaging - please
feel free to contribute, question, and criticize :)

thanks,



-- 
Ken Giusti  (kgiu...@gmail.com)


Re: [openstack-dev] [glance] Stepping down from core

2017-06-21 Thread Erno Kuvaja
On Tue, Jun 20, 2017 at 11:07 AM, Flavio Percoco  wrote:
> On 20/06/17 09:31 +1200, feilong wrote:
>>
>> Hi there,
>>
>> I've been a Glance core since 2013 and been involved in the Glance
>> community even longer, so I care deeply about Glance. My situation right now
>> is such that I cannot devote sufficient time to Glance, and while as you've
>> seen elsewhere on the mailing list, Glance needs reviewers, I'm afraid that
>> keeping my name on the core list is giving people a false impression of how
>> dire the current Glance personnel situation is. So after discussed with
>> Glance PTL, I'd like to offer my resignation as a member of the Glance core
>> reviewer team. Thank you for your understanding.
>
>
> Thanks for being honest and open about the situation. I agree with you that
> this
> is the right move.
>
> I'd like to thank you for all these years of service and I think it goes
> without
> saying that you're welcome back in the team anytime you want.
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>


Just want to reinforce what Flavio said. Big thanks for all your time
and expertise! You're always welcome back if your time so permits.

- Erno



Re: [openstack-dev] [openstack-ansible][designate][bind9] Looking for ways to limit users to adding hosts within fixed personal domain

2017-06-21 Thread Graham Hayes
On 21/06/17 13:26, Lawrence J. Albinson wrote:
> Hi Graham,
> 
> Many thanks for your prompt reply; your suggestion is spot on for my current 
> use case. Again, thanks.
> 
> On another note, I see that designate has zone blacklisting that could be 
> used to limit the names of newly created zones using a negative regex. But 
> there is no zone whitelisting. Is there a reason for this?

No particular reason - the use case for blacklists was when we were
running it in a public cloud - we wanted to stop users from creating
zones that could be interpreted as "official".

We have a "tld" feature which could be used as a sudo whitelist - as
long as you want to restrict users to subdomains of a few pre-decided
zones.

e.g. setting tlds of "cloud.example.com." and "internal.example.com."
will mean that users can only create *.(cloud|internal).example.com.
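With the client that would be something like (assuming the tld commands
shipped with python-designateclient):

  openstack tld create --name cloud.example.com.
  openstack tld create --name internal.example.com.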

Thanks,

- Graham


> Kind regards, Lawrence
> 
> Lawrence J Albinson
> 
> From: Graham Hayes
> Sent: 20 June 2017 13:01
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [openstack-ansible][designate][bind9] Looking 
> for ways to limit users to adding hosts within fixed personal domain
> 
> On 20/06/17 12:37, Lawrence J. Albinson wrote:
>> I am trying to find pointers to how I might limit non-privileged users
>> to a single domain when adding hosts to Designate.
>>
>> It is a private OpenStack cloud and each user will have a personal
>> sub-domain of a common organisational domain, like so:
>> fred.organisation.com. and will be able to add hosts such as:
>> www.fred.organisation.com.  .
>>
>> (The designate back-end is Bind9.)
>>
>> Any pointers about how to do this would be very gratefully received.
>>
>> Kind regards, Lawrence
>>
>> Lawrence J Albinson
> 
> Sure - there are a few ways to do this, but the simplest would be the
> following:
> 
> (I am assuming the zone is pre-created by the admin when provisioning
> the project)
> 
> In the policy.json file we have controls for what users can do to zones
> [1]
> 
> I would suggest changing
> 
> `create_zone`, `delete_zone`, and `update_zone` to `rule:admin`
> 
> then the admin can create the zone by running
> 
> `openstack zone create --sudo-project-id  --email
> t...@example.com subdomain.example.com.`
> 
> And the zone should be created in the project, and they will have full
> control of the recordsets inside that zone.
> 
> If that does not work, we support "zone transfers"[2] (it's a terrible
> name) where the admin can create the new sub zone in the admin project
> and then transfer ownership to the new project.
> 
> 1 -
> https://github.com/openstack/designate/blob/master/etc/designate/policy.json#L43-L56
> 
> 2 -
> https://docs.openstack.org/developer/python-designateclient/shell-v2-examples.html#working-with-zone-transfer
>>
> 
> 





Re: [openstack-dev] [openstack-ansible][designate][bind9] Looking for ways to limit users to adding hosts within fixed personal domain

2017-06-21 Thread Lawrence J. Albinson
Hi Graham,

Many thanks for your prompt reply; your suggestion is spot on for my current use 
case. Again, thanks.

On another note, I see that designate has zone blacklisting that could be used 
to limit the names of newly created zones using a negative regex. But there is 
no zone whitelisting. Is there a reason for this?

Kind regards, Lawrence

Lawrence J Albinson

From: Graham Hayes
Sent: 20 June 2017 13:01
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [openstack-ansible][designate][bind9] Looking for 
ways to limit users to adding hosts within fixed personal domain

On 20/06/17 12:37, Lawrence J. Albinson wrote:
> I am trying to find pointers to how I might limit non-privileged users
> to a single domain when adding hosts to Designate.
>
> It is a private OpenStack cloud and each user will have a personal
> sub-domain of a common organisational domain, like so:
> fred.organisation.com. and will be able to add hosts such as:
> www.fred.organisation.com.  .
>
> (The designate back-end is Bind9.)
>
> Any pointers about how to do this would be very gratefully received.
>
> Kind regards, Lawrence
>
> Lawrence J Albinson

Sure - there are a few ways to do this, but the simplest would be the
following:

(I am assuming the zone is pre-created by the admin when provisioning
the project)

In the policy.json file we have controls for what users can do to zones
[1]

I would suggest changing

`create_zone`, `delete_zone`, and `update_zone` to `rule:admin`
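i.e. something like this in policy.json (a sketch; the rule names are the
ones in the policy file linked below [1]):

  "create_zone": "rule:admin",
  "update_zone": "rule:admin",
  "delete_zone": "rule:admin",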

then the admin can create the zone by running

`openstack zone create --sudo-project-id  --email
t...@example.com subdomain.example.com.`

And the zone should be created in the project, and they will have full
control of the recordsets inside that zone.

If that does not work, we support "zone transfers"[2] (it's a terrible
name) where the admin can create the new sub zone in the admin project
and then transfer ownership to the new project.

1 -
https://github.com/openstack/designate/blob/master/etc/designate/policy.json#L43-L56

2 -
https://docs.openstack.org/developer/python-designateclient/shell-v2-examples.html#working-with-zone-transfer
>




Re: [openstack-dev] [openstack-ansible][designate] Recommended way to inject the rndc.key into the designate container when using Bind9

2017-06-21 Thread Lawrence J. Albinson
Hi Andy,

As ever, many thanks for the prompt reply. I'm going to work up a patch 
proposal and submit it as soon as possible.

Kind regards, Lawrence


From: Andy McCrae
Sent: 20 June 2017 14:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-ansible][designate] Recommended way to 
inject the rndc.key into the designate container when using Bind9

Hi Lawrence,

Thanks for providing the feedback!

I am using OpenStack Designate with Bind9 as the slave and have managed to set 
it up with openstack-ansible in all respects bar one: I am unable to 
automatically inject the rndc.key file into the Designate container.
Is there a recognised way to do this (and similar things elsewhere across the 
OpenStack family) within the openstack-ansible framework without branching the 
repo and making modifications?

We don't currently have a set way to do that, although after talking with 
Graham and a few others, it seems this is something the designate role should do, 
so I'd label that a bug. That said, rather than having a fork with 
modifications, this seems like functionality that would be useful to most 
deployers of Designate, so it would be great to create a patch to add it. 
I'm imagining it would just be a templated rndc.key file with a 
"designate_rndc_key_value" variable 
(or something along those lines!).
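Something along these lines (a sketch only - the variable name is the one
suggested above, and the file locations are assumed):

  # templates/rndc.key.j2
  key "rndc-key" {
      algorithm hmac-md5;
      secret "{{ designate_rndc_key_value }}";
  };

  # task in the designate role
  - name: Drop rndc key into the designate container
    template:
      src: rndc.key.j2
      dest: /etc/designate/rndc.key
      mode: "0640"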

If that sounds like something you'd like to give a go there is some good 
documentation around what to do to get started here: 
https://docs.openstack.org/infra/manual/developers.html

Also! Feel free to jump into the #openstack-ansible channel on Freenode irc, 
we're a pretty helpful bunch, and we'd love to help you get involved.

Hopefully that helps!
Andy


With apologies in advance in the event that I have overlooked an essential 
piece of documentation on how to do this.

Kind regards, Lawrence


Re: [openstack-dev] [TripleO] A proposal for hackathon to reduce deploy time of TripleO

2017-06-21 Thread Emilien Macchi
Reminder: the sprint is starting today:

- /join #openstack-sprint
- tag relevant bugs in Launchpad to be found on
https://bugs.launchpad.net/tripleo/+bugs?field.tag=deployment-time
- create new bugs related to performances and tag them.

Anyone is welcome to join the sprint; feel free to ping me or Sagi if
you have any questions or feedback,

Thanks!

On Mon, Jun 19, 2017 at 5:30 PM, Emilien Macchi  wrote:
> Reminder: the sprint will start on Wednesday of this week.
>
> Actions required:
>
> - /join #openstack-sprint
> - tag relevant bugs in Launchpad to be found on
> https://bugs.launchpad.net/tripleo/+bugs?field.tag=deployment-time
> - create new bugs related to performances and tag them.
>
> Anyone is welcome to join the sprint; feel free to ping me or Sagi if
> you have any questions or feedback,
>
> Thanks!
>
> On Thu, Jun 8, 2017 at 10:44 AM, Emilien Macchi  wrote:
>> On Thu, Jun 8, 2017 at 4:19 PM, Sagi Shnaidman  wrote:
>>> Hi, all
>>>
>>> Thanks for your attention and proposals for this hackathon. With full
>>> understanding that optimization of deployment is an on-going effort and should
>>> not be started and finished in these 2 days only, we still want to focus
>>> on these issues in the sprint. Even if we don't immediately solve all
>>> problems, more people will be exposed to this field, additional tasks/bugs
>>> could be opened and scheduled, and maybe additional tests, process
>>> improvements and other insights will be introduced.
>>> If we don't reduce CI job time to 1 hour by Thursday, it doesn't mean we
>>> failed the mission, please remember.
>>> The main goal of this sprint is to find problems and their work scope, and
>>> to find as many solutions for them as possible, using inter-team and
>>> intra-team collaboration and knowledge sharing. Ideally this collaboration
>>> and on-going effort will carry further with such momentum. :)
>>>
>>> I suggest to do it in 21 - 22 Jun 2017 (Wednesday - Thursday). All other
>>> details are provided in etherpad:
>>> https://etherpad.openstack.org/p/tripleo-deploy-time-hack and in wiki as
>>> well: https://wiki.openstack.org/wiki/VirtualSprints
>>> We have a "deployment-time" tag for bugs:
>>> https://bugs.launchpad.net/tripleo/+bugs?field.tag=deployment-time Please
>>> use it for bugs that affect deployment time or CI job run time. It will be
>>> easier to handle them in the sprint.
>>>
>>> Please provide your comments and suggestions.
>>
>> Thanks Sagi for bringing this up, this is really awesome.
>> One thing we could do to make this sprint productive is to report /
>> triage Launchpad bugs related to $topic so we have a list of things we
>> can work on during these 2 days.
>>
>> Maybe we could go through:
>> https://launchpad.net/tripleo/+milestone/pike-2
>> https://launchpad.net/tripleo/+milestone/pike-3 and add
>> deployment-time to all the bugs we think it's related to performances.
>>
>> Once we have the list, we'll work on them by priority and by area of 
>> knowledge.
>>
>> Also, folks like face to face interactions. We'll take care of
>> preparing an open Bluejeans where folks can easily join and ask
>> questions. We'll probably be connected all day, so anyone can join
>> anytime. No schedule constraint here.
>>
>> Any feedback is welcome,
>>
>> Thanks!
>>
>>> Thanks
>>>
>>>
>>>
>>> On Tue, May 23, 2017 at 1:47 PM, Sagi Shnaidman  wrote:

 Hi, all

 I'd like to propose an idea to hold a one- or two-day hackathon in the TripleO
 project with the main goal of reducing the deploy time of TripleO.

 - How could it be arranged?

 We can arrange a separate IRC channel and Bluejeans video conference
 session for hackathon in these days to create a "presence" feeling.

 - How to participate and contribute?

 We'll have a few responsibility fields like tripleo-quickstart,
 containers, storage, HA, baremetal, etc - the exact list should be ready
 before the hackathon so that everybody can assign themselves to one of these
 "teams".
 It's good to have somebody in each team be a stakeholder, responsible for
 organization and tasks.

 - What is the goal?

 The goal of this hackathon is to reduce the deployment time of TripleO as much
 as possible.

 For example part of CI team takes a task to reduce quickstart tasks time.
 It includes statistics collection, profiling and detection of places to
 optimize. After this tasks are created, patches are tested and submitted.

 Prizes will be presented to the teams which save the most time :)

 What do you think?

 Thanks
 --
 Best regards
 Sagi Shnaidman
>>>
>>>
>>>
>>>
>>> --
>>> Best regards
>>> Sagi Shnaidman
>>
>>
>>
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi


[openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-21 Thread Shewale, Bhagyashri
Hi nova devs,

I would like to write functional tests to check the exact req/resp for each 
placement API for all supported versions, similar
to what is already done for other APIs under 
nova/tests/functional/api_sample_tests/api_samples/*.
These request/response json samples can be used by api.openstack.org and in 
the manuals.

There are already functional tests written for placement APIs under 
nova/tests/functional/api/openstack/placement,
but these tests don't check the entire HTTP response for each API for all 
supported versions.

I think adding such functional tests for checking the response for each placement 
API would be beneficial to the project.
If there is interest in creating such functional tests, I can file a new 
blueprint for this activity.


Regards,
Bhagyashri Shewale



Re: [openstack-dev] [glance] nominating Abhishek Kekane for glance core

2017-06-21 Thread Kekane, Abhishek
Thank you all for your positive responses.

I will definitely put in 100% effort to justify my selection as core.

Best Regards,

Abhishek Kekane
 


-Original Message-
From: Brian Rosmaita [mailto:rosmaita.foss...@gmail.com] 
Sent: Wednesday, June 21, 2017 4:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] nominating Abhishek Kekane for glance core

Having heard only affirmative responses, I've added Abhishek Kekane to the 
Glance core group, with all the rights and privileges pertaining thereto.

Welcome to the Glance core team, Abhishek!

cheers,
brian

On Tue, Jun 20, 2017 at 1:35 PM, Mikhail Fedosin  wrote:
> Wasn't Abhishek a glance core before? What a surprise for me o_O I 
> thought that he was just being modest and did not put -2 on the patches.
>
> Undoubtedly, we need to correct this misunderstanding as quickly as 
> possible and invite Abhishek to the core team.
>
> On Mon, Jun 19, 2017 at 5:40 PM, Erno Kuvaja  wrote:
>>
>> On Fri, Jun 16, 2017 at 3:26 PM, Brian Rosmaita 
>>  wrote:
>> > I'm nominating Abhishek Kekane (abhishekk on IRC) to be a Glance 
>> > core for the Pike cycle.  Abhishek has been around the Glance 
>> > community for a long time and is familiar with the architecture and 
>> > design patterns used in Glance and its related projects.  He's 
>> > contributed code, triaged bugs, provided bugfixes, and done quality 
>> > reviews for Glance.
>> >
>> > Abhishek has been proposed for Glance core before, but some members 
>> > of the community were concerned that he wasn't able to devote 
>> > sufficient time to Glance.  Given the current situation with the 
>> > project, however, it would be an enormous help to have someone as 
>> > knowledgeable about Glance as Abhishek to have +2 powers.  I 
>> > discussed this with Abhishek, he's aware that some in the community 
>> > have that concern, and he's agreed to be a core reviewer for the 
>> > Pike cycle.  The community can revisit his status early in Queens.
>> >
>> > Now that I've written that down, that puts Abhishek in the same 
>> > boat as all core reviewers, i.e., their levels of participation and 
>> > commitment are assessed at the beginning of each cycle and 
>> > adjustments made.
>> >
>> > In any case, I'd like to put Abhishek to work as soon as possible!  
>> > So please reply to this message with comments or concerns before 
>> > 23:59 UTC on Monday 19 June.  I'd like to confirm Abhishek as a 
>> > core on Tuesday 20 June.
>> >
>> > thanks,
>> > brian
>> >
>>
>> +2 from me! This sounds like a great solution for our immediate
>> staffing issues and I'm happy to hear Abhishek would have the cycles 
>> to help us. Let's hope we get to enjoy his knowledge and good-quality 
>> reviews for many cycles to come.
>>
>> - Erno
>>
>> >
>>
>
>
>




Re: [openstack-dev] [glance] nominating Abhishek Kekane for glance core

2017-06-21 Thread Brian Rosmaita
Having heard only affirmative responses, I've added Abhishek Kekane to
the Glance core group, with all the rights and privileges pertaining
thereto.

Welcome to the Glance core team, Abhishek!

cheers,
brian

On Tue, Jun 20, 2017 at 1:35 PM, Mikhail Fedosin  wrote:
> Wasn't Abhishek a glance core before? What a surprise for me o_O
> I thought that he was just being modest and did not put -2 on the patches.
>
> Undoubtedly, we need to correct this misunderstanding as quickly as
> possible and invite Abhishek to the core team.
>
> On Mon, Jun 19, 2017 at 5:40 PM, Erno Kuvaja  wrote:
>>
>> On Fri, Jun 16, 2017 at 3:26 PM, Brian Rosmaita
>>  wrote:
>> > I'm nominating Abhishek Kekane (abhishekk on IRC) to be a Glance core
>> > for the Pike cycle.  Abhishek has been around the Glance community for
>> > a long time and is familiar with the architecture and design patterns
>> > used in Glance and its related projects.  He's contributed code,
>> > triaged bugs, provided bugfixes, and done quality reviews for Glance.
>> >
>> > Abhishek has been proposed for Glance core before, but some members of
>> > the community were concerned that he wasn't able to devote sufficient
>> > time to Glance.  Given the current situation with the project,
>> > however, it would be an enormous help to have someone as knowledgeable
>> > about Glance as Abhishek to have +2 powers.  I discussed this with
>> > Abhishek, he's aware that some in the community have that concern, and
>> > he's agreed to be a core reviewer for the Pike cycle.  The community
>> > can revisit his status early in Queens.
>> >
>> > Now that I've written that down, that puts Abhishek in the same boat
>> > as all core reviewers, i.e., their levels of participation and
>> > commitment are assessed at the beginning of each cycle and adjustments
>> > made.
>> >
>> > In any case, I'd like to put Abhishek to work as soon as possible!  So
>> > please reply to this message with comments or concerns before 23:59
>> > UTC on Monday 19 June.  I'd like to confirm Abhishek as a core on
>> > Tuesday 20 June.
>> >
>> > thanks,
>> > brian
>> >
>>
>> +2 from me! This sounds like a great solution for our immediate
>> staffing issues and I'm happy to hear Abhishek would have the cycles
>> to help us. Let's hope we get to enjoy his knowledge and good quality
>> reviews on many cycles forward.
>>
>> - Erno
>>
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][horizon][fwaas][vpnaas] fwaas/vpnaas dashboard split out

2017-06-21 Thread Akihiro Motoki
First of all, IMHO this approach is not a requirement.
Each project can choose their preferred way.

# Hopefully all dashboards are listed in the horizon plugin registry
# if you have dashboard support.
# https://docs.openstack.org/developer/horizon/install/plugin-registry.html

The following is my thinking and the reason I chose a separate project for
the fwaas/vpnaas dashboards. We just followed what most horizon plugins do.

I believe translation support in the dashboard is one of the most
important aspects. Wearing my I18n team hat, our rough user survey shows
that the dashboard is the area where users need translations most.
The translation script in our infra supports only the separate-project
model (I am one of the authors of the current script) and at the moment
we have no plan to enhance it unless a new volunteer wants to explore it.
In the case of the FWaaS/VPNaaS dashboards, we already support translation
and would like to keep it without exploring a new way.

I won't talk about testing much here; it is a straightforward topic.

Another point is the deployment perspective.
Deployers might not want to install unnecessary things. Neutron and
horizon may be deployed on different servers, and operators may not want
to install unrelated stuff on either.
It also affects distro packaging. Packagers may need to create separate
packages from one source: one for the neutron plugin and the other for
the horizon plugin.

This is my view, and why I personally prefer a separate repository.

Thanks,
Akihiro

2017-06-21 19:07 GMT+09:00 Thomas Morin :
> Kevin Benton :
>> Some context here:
>> http://lists.openstack.org/pipermail/openstack-dev/2017-April/115200.html
>>
>
> Thanks, I had missed this one.
>
> So, what I gather is that the only drawback noted for "(b) dashboard code in
> individual project" is "Requires extra efforts to support neutron and
> horizon codes in a single repository for testing and translation supports.
> Each project needs to explore the way.".   While I won't disagree, my
> question would be the following: since we have something that works (except
> dashboard translation, which we haven't explored), should we move to the
> model agreed on for new work (given that there is also overhead in creating
> and maintaining a new repo)?
>
> Best,
>
> -Thomas
>
>
> On Wed, Jun 21, 2017 at 2:33 AM, Thomas Morin 
> wrote:
>>
>> Hi Akihiro,
>>
>> While I understand the motivation to move these dashboards from
>> openstack/horizon, what is the reason to prefer a distinct repo for the
>> dashboard rather than hosting it in the main repo of these projects ?
>>
>> (networking-bgpvpn has had a dashboard for some time already, it is hosted
>> under networking-bgpvpn/bgpvpn_dashboard and we haven't heard about any
>> drawback)
>>
>> Thanks,
>>
>> -Thomas
>>
>>
>>
>> Akihiro Motoki :
>>
>> Hi neutron and horizon teams (especially fwaas and vpnaas folks),
>>
>> As we discussed so far, I prepared separate git repositories for FWaaS
>> and VPNaaS dashboards.
>> http://git.openstack.org/cgit/openstack/neutron-fwaas-dashboard/
>> http://git.openstack.org/cgit/openstack/neutron-vpnaas-dashboard/
>>
>> All new features will be implemented in the new repositories, for
>> example, FWaaS v2 support.
>> The initial core members consist of neutron-fwaas/vpnaas-core
>> (respectively) + horizon-core.
>>
>> There are several things to do to complete the split out.
>> I gathered a list of work items at the etherpad and we will track the
>> progress here.
>> https://etherpad.openstack.org/p/horizon-fwaas-vpnaas-splitout
>> If you are interested in helping the efforts, sign up on the etherpad
>> or contact me.
>>
>> I would like to release the initial release which is compatible with
>> the current horizon
>> FWaaS/VPNaaS dashboard (with no new features).
>> I hope we can release it around R-8 week (Jul 3) or R-7 (Jul 10).
>>
>> It also will be good examples for neutron stadium/related projects
>> which are interested in
>> adding dashboard support. AFAIK, networking-sfc, tap-as-a-service are
>> interested in it.
>>
>> Thanks,
>> Akihiro
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-21 Thread Sean Dague
On 06/21/2017 04:43 AM, sfinu...@redhat.com wrote:
> On Tue, 2017-06-20 at 16:48 -0600, Chris Friesen wrote:
>> On 06/20/2017 09:51 AM, Eric Fried wrote:
>>> Nice Stephen!
>>>
>>> For those who aren't aware, the rendered version (pretty, so pretty) can
>>> be accessed via the gate-nova-docs-ubuntu-xenial jenkins job:
>>>
>>> http://docs-draft.openstack.org/10/475810/1/check/gate-nova-docs-ubuntu-xen
>>> ial/25e5173//doc/build/html/scheduling.html?highlight=scheduling
>>
>> Can we teach it to not put line breaks in the middle of words in the text
>> boxes?
> 
> Doesn't seem configurable in its current form :( This, and the defaulting to
> PNG output instead of SVG (which makes things ungreppable) are my biggest
> bugbear.
> 
> I'll go have a look at the sauce and see what can be done about it. If not,
> still better than nothing?

I've actually looked through the blockdiag source (to try to solve a
similar problem). There is no easy way to change it.

If people find it confusing, the best thing to do would be short labels
on boxes, then explain in more detail in footnotes.
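
Something like this, as a rough blockdiag sketch (node names and the
footnote markers are invented for illustration), with "[1]"/"[2]" expanded
in plain-text footnotes under the diagram:

  blockdiag {
    api [label = "API [1]"];
    sched [label = "Sched [2]"];
    api -> sched;
  }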

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-21 Thread Henning Schild
On Tue, 20 Jun 2017 10:04:30 -0400, Luiz Capitulino wrote:

> On Tue, 20 Jun 2017 09:48:23 +0200
> Henning Schild  wrote:
> 
> > Hi,
> > 
> > We are using OpenStack for managing realtime guests. We modified
> > it and contributed to discussions on how to model the realtime
> > feature. More recent versions of OpenStack have support for
> > realtime, and there are a few proposals on how to improve that
> > further.
> > 
> > But there is still no full answer on how to distribute threads
> > across host-cores. The vcpus are easy but for the emulation and
> > io-threads there are multiple options. I would like to collect the
> > constraints from a qemu/kvm perspective first, and then possibly
> > influence the OpenStack development
> > 
> > I will put the summary/questions first, the text below provides more
> > context to where the questions come from.
> > - How do you distribute your threads when reaching the really low
> >   cyclictest results in the guests? In [3] Rik talked about problems
> >   like lock holder preemption, starvation etc. but not where/how to
> >   schedule emulators and io  
> 
> We put emulator threads and io-threads in housekeeping cores in
> the host. I think housekeeping cores is what you're calling
> best-effort cores, those are non-isolated cores that will run host
> load.

As expected, any best-effort/housekeeping core will do, but overlapping
with the vcpu cores is a bad idea.
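
(For context, that host-side split between housekeeping and isolated
cores is typically expressed with kernel boot parameters along these
lines; the core numbers are only illustrative:)

  isolcpus=4-15 nohz_full=4-15 rcu_nocbs=4-15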

> > - Is it ok to put a vcpu and emulator thread on the same core as
> > long as the guest knows about it? Any funny behaving guest, not
> > just Linux.  
> 
> We can't do this for KVM-RT because we run all vcpu threads with
> FIFO priority.

Same point as above, meaning the "hw:cpu_realtime_mask" approach is
wrong for realtime.

> However, we have another project with DPDK whose goal is to achieve
> zero-loss networking. The configuration required by this project is
> very similar to the one required by KVM-RT. One difference though is
> that we don't use RT and hence don't use FIFO priority.
> 
> In this project we've been running with the emulator thread and a
> vcpu sharing the same core. As long as the guest housekeeping CPUs
> are idle, we don't get any packet drops (most of the time, what
> causes packet drops in this test-case would cause spikes in
> cyclictest). However, we're seeing some packet drops for certain
> guest workloads which we are still debugging.

Ok, but that seems to be a different scenario, one where
hw:cpu_policy=dedicated should be sufficient. However, if the io and
emulator threads have to be placed on a subset of the dedicated cpus,
something like hw:cpu_realtime_mask would be required.
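
(For readers following along: the knobs discussed here are nova flavor
extra specs, set roughly like this; the flavor name and mask are
illustrative, and ^0 marks vcpu0 as the one non-realtime vcpu:)

  openstack flavor set rt1 \
    --property hw:cpu_policy=dedicated \
    --property hw:cpu_realtime=yes \
    --property hw:cpu_realtime_mask=^0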

> > - Is it ok to make the emulators potentially slow by running them on
> >   busy best-effort cores, or will they quickly be on the critical
> > path if you do more than just cyclictest? - our experience says we
> > don't need them reactive even with rt-networking involved  
> 
> I believe it is ok.

Ok.
 
> > Our goal is to reach a high packing density of realtime VMs. Our
> > pragmatic first choice was to run all non-vcpu-threads on a shared
> > set of pcpus where we also run best-effort VMs and host load.
> > Now the OpenStack guys are not too happy with that because that is
> > load outside the assigned resources, which leads to quota and
> > accounting problems.
> > 
> > So the current OpenStack model is to run those threads next to one
> > or more vcpu-threads. [1] You will need to remember that the vcpus
> > in question should not be your rt-cpus in the guest. I.e. if vcpu0
> > shares its pcpu with the hypervisor noise your preemptrt-guest
> > would use isolcpus=1.
> > 
> > Is that kind of sharing a pcpu really a good idea? I could imagine
> > things like smp housekeeping (cache invalidation etc.) to eventually
> > cause vcpu1 having to wait for the emulator stuck in IO.  
> 
> Agreed. IIRC, in the beginning of KVM-RT we saw a problem where
> running vcpu0 on an non-isolated core and without FIFO priority
> caused spikes in vcpu1. I guess we debugged this down to vcpu1
> waiting a few dozen microseconds for vcpu0 for some reason. Running
> vcpu0 on a isolated core with FIFO priority fixed this (again, this
> was years ago, I won't remember all the details).
> 
> > Or maybe a busy polling vcpu0 starving its own emulator causing high
> > latency or even deadlocks.  
> 
> This will probably happen if you run vcpu0 with FIFO priority.

Two more points that indicate that hw:cpu_realtime_mask (putting
emulators/io next to any vcpu) does not work for general rt.

> > Even if it happens to work for Linux guests it seems like a strong
> > assumption that an rt-guest that has noise cores can deal with even
> > more noise one scheduling level below.
> > 
> > More recent proposals [2] suggest a scheme where the emulator and io
> > threads are on a separate core. That sounds more reasonable /
> > conservative but dramatically increases the per-VM cost.

Re: [openstack-dev] [neutron][horizon][fwaas][vpnaas] fwaas/vpnaas dashboard split out

2017-06-21 Thread Thomas Morin

Kevin Benton :
> Some context here: 
http://lists.openstack.org/pipermail/openstack-dev/2017-April/115200.html

>

Thanks, I had missed this one.

So, what I gather is that the only drawback noted for "(b) dashboard 
code in individual project" is "Requires extra efforts to support 
neutron and horizon codes in a single repository for testing and 
translation supports. Each project needs to explore the way.".   While I 
won't disagree, my question would be the following: since we have 
something that works (except dashboard translation, which we haven't 
explored), should we move to the model agreed on for new work (given 
that there is also overhead in creating and maintaining a new repo)?


Best,

-Thomas


On Wed, Jun 21, 2017 at 2:33 AM, Thomas Morin > wrote:


Hi Akihiro,

While I understand the motivation to move these dashboards from
openstack/horizon, what is the reason to prefer a distinct repo
for the dashboard rather than hosting it in the main repo of these
projects ?

(networking-bgpvpn has had a dashboard for some time already, it
is hosted under networking-bgpvpn/bgpvpn_dashboard and we haven't
heard about any drawback)

Thanks,

-Thomas



Akihiro Motoki :

Hi neutron and horizon teams (especially fwaas and vpnaas folks),

As we discussed so far, I prepared separate git repositories for FWaaS
and VPNaaS dashboards.
http://git.openstack.org/cgit/openstack/neutron-fwaas-dashboard/

http://git.openstack.org/cgit/openstack/neutron-vpnaas-dashboard/


All new features will be implemented in the new repositories, for
example, FWaaS v2 support.
The initial core members consist of neutron-fwaas/vpnaas-core
(respectively) + horizon-core.

There are several things to do to complete the split out.
I gathered a list of work items at the etherpad and we will track the
progress here.
https://etherpad.openstack.org/p/horizon-fwaas-vpnaas-splitout

If you are interested in helping the efforts, sign up on the etherpad
or contact me.

I would like to release the initial release which is compatible with
the current horizon
FWaaS/VPNaaS dashboard (with no new features).
I hope we can release it around R-8 week (Jul 3) or R-7 (Jul 10).

It also will be good examples for neutron stadium/related projects
which are interested in
adding dashboard support. AFAIK, networking-sfc, tap-as-a-service are
interested in it.

Thanks,
Akihiro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][horizon][fwaas][vpnaas] fwaas/vpnaas dashboard split out

2017-06-21 Thread Kevin Benton
Some context here:
http://lists.openstack.org/pipermail/openstack-dev/2017-April/115200.html

On Wed, Jun 21, 2017 at 2:33 AM, Thomas Morin 
wrote:

> Hi Akihiro,
>
> While I understand the motivation to move these dashboards from
> openstack/horizon, what is the reason to prefer a distinct repo for the
> dashboard rather than hosting it in the main repo of these projects ?
>
> (networking-bgpvpn has had a dashboard for some time already, it is hosted
> under networking-bgpvpn/bgpvpn_dashboard and we haven't heard about any
> drawback)
>
> Thanks,
>
> -Thomas
>
>
>
> Akihiro Motoki :
>
> Hi neutron and horizon teams (especially fwaas and vpnaas folks),
>
> As we discussed so far, I prepared separate git repositories for FWaaS
> and VPNaaS dashboards.
> http://git.openstack.org/cgit/openstack/neutron-fwaas-dashboard/
> http://git.openstack.org/cgit/openstack/neutron-vpnaas-dashboard/
>
> All new features will be implemented in the new repositories, for
> example, FWaaS v2 support.
> The initial core members consist of neutron-fwaas/vpnaas-core
> (respectively) + horizon-core.
>
> There are several things to do to complete the split out.
> I gathered a list of work items at the etherpad and we will track the
> progress here.
> https://etherpad.openstack.org/p/horizon-fwaas-vpnaas-splitout
> If you are interested in helping the efforts, sign up on the etherpad
> or contact me.
>
> I would like to release the initial release which is compatible with
> the current horizon
> FWaaS/VPNaaS dashboard (with no new features).
> I hope we can release it around R-8 week (Jul 3) or R-7 (Jul 10).
>
> It also will be good examples for neutron stadium/related projects
> which are interested in
> adding dashboard support. AFAIK, networking-sfc, tap-as-a-service are
> interested in it.
>
> Thanks,
> Akihiro
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][horizon][fwaas][vpnaas] fwaas/vpnaas dashboard split out

2017-06-21 Thread Thomas Morin

Hi Akihiro,

While I understand the motivation to move these dashboards from 
openstack/horizon, what is the reason to prefer a distinct repo for the 
dashboard rather than hosting it in the main repo of these projects ?


(networking-bgpvpn has had a dashboard for some time already, it is 
hosted under networking-bgpvpn/bgpvpn_dashboard and we haven't heard 
about any drawback)


Thanks,

-Thomas



Akihiro Motoki :

Hi neutron and horizon teams (especially fwaas and vpnaas folks),

As we discussed so far, I prepared separate git repositories for FWaaS
and VPNaaS dashboards.
http://git.openstack.org/cgit/openstack/neutron-fwaas-dashboard/
http://git.openstack.org/cgit/openstack/neutron-vpnaas-dashboard/

All new features will be implemented in the new repositories, for
example, FWaaS v2 support.
The initial core members consist of neutron-fwaas/vpnaas-core
(respectively) + horizon-core.

There are several things to do to complete the split out.
I gathered a list of work items at the etherpad and we will track the
progress here.
https://etherpad.openstack.org/p/horizon-fwaas-vpnaas-splitout
If you are interested in helping the efforts, sign up on the etherpad
or contact me.

I would like to release the initial release which is compatible with
the current horizon
FWaaS/VPNaaS dashboard (with no new features).
I hope we can release it around R-8 week (Jul 3) or R-7 (Jul 10).

It also will be good examples for neutron stadium/related projects
which are interested in
adding dashboard support. AFAIK, networking-sfc, tap-as-a-service are
interested in it.

Thanks,
Akihiro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-21 Thread Kevin Benton
Why not just delete the service plugins you don't support from the default
plugins dict?
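
For illustration, a vendor tree could prune the DEFAULT_SERVICE_PLUGINS
dict in neutron/plugins/common/constants.py before the server loads
plugins. A rough sketch (the dict matches the constants.py file linked
below; the pruned aliases are placeholders):

  # hypothetical snippet, run at import time by a vendor core plugin
  from neutron.plugins.common import constants

  for alias in ('auto_allocate', 'tag', 'revisions'):
      # drop default service plugins the non-DB core plugin cannot support
      constants.DEFAULT_SERVICE_PLUGINS.pop(alias, None)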

On Wed, Jun 21, 2017 at 1:45 AM, Édouard Thuleau 
wrote:

> Ok, we would like to help on that. How can we start?
>
> I think the issue I raise in that thread must be the first point to
> address, and my second proposition seems to be the correct one. What do
> you think?
> But it will need some time, and I'm not sure we'll be able to fix all
> service plugins loaded by default before the next Pike release.
>
> I'd like to propose a workaround until all default service plugins are
> compatible with non-DB core plugins. We can continue to load the
> default service plugins list but authorize a core plugin to disable
> it completely with a private attribute on the core plugin class, like
> it's done for bulk/pagination/sorting operations.
>
> Of course, we need to add the ability to report any regression on
> that. I think unit tests will help and we can also work on a
> functional test based on a fake non-DB core plugin.
>
> Regards,
> Édouard.
>
> On Tue, Jun 20, 2017 at 12:09 AM, Kevin Benton  wrote:
> > The issue is mainly developer resources. Everyone currently working
> upstream
> > doesn't have the bandwidth to keep adding/reviewing the layers of
> interfaces
> > to make the DB optional that go untested. (None of the projects that
> would
> > use them run a CI system that reports results on Neutron patches.)
> >
> > I think we can certainly accept patches to do the things you are
> proposing,
> > but there is no guarantee that it won't regress to being DB-dependent
> until
> > there is something reporting results back telling us when it breaks.
> >
> > So it's not that the community is against non-DB core plugins, it's just
> > that the people developing those plugins don't participate in the
> community
> > to ensure they work.
> >
> > Cheers
> >
> >
> > On Mon, Jun 19, 2017 at 2:15 AM, Édouard Thuleau <
> edouard.thul...@gmail.com>
> > wrote:
> >>
> >> Oops, sent too fast, sorry. I try again.
> >>
> >> Hi,
> >>
> >> Since Mitaka release, a default service plugins list is loaded when
> >> Neutron
> >> server starts [1]. That list is not editable and was extended with few
> >> services
> >> [2]. But all of them rely on the Neutron DB model.
> >>
> >> If a core driver is not based on the ML2 core plugin framework or not
> >> based on
> >> the 'neutron.db.models_v2' class, all that service plugins will not
> work.
> >>
> >> So my first question is Does Neutron still support core plugin not based
> >> on ML2
> >> or 'neutron.db.models_v2' class?
> >>
> >> If yes, I would like to propose two solutions:
> >> - permits core plugin to overload the service plugin class by its own
> >> implementation and continuing to use the actual Neutron db based
> services
> >> as
> >> default.
> >> - modifying all default plugin service to use service plugin driver
> >> framework [3], and set the actual Neutron db based implementation as
> >> default driver for services. That permits to core drivers not based on
> the
> >> Neutron DB to specify a driver. We can see that solution was adopted in
> >> the
> >> networking-bgpvpn project, where can find two abstract driver classes,
> one
> >> for
> >> core driver based on Neutron DB model [4] and one used by core driver
> not
> >> based
> >> on the DB [5] as the Contrail driver [6].
> >>
> >> [1]
> >> https://github.com/openstack/neutron/commit/
> aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-
> 9169a6595980d19b2649d5bedfff05ce
> >> [2]
> >> https://github.com/openstack/neutron/blob/master/neutron/
> plugins/common/constants.py#L43
> >> [3]
> >> https://github.com/openstack/neutron/blob/master/neutron/
> services/service_base.py#L27
> >> [4]
> >> https://github.com/openstack/networking-bgpvpn/blob/master/
> networking_bgpvpn/neutron/services/service_drivers/driver_api.py#L226
> >> [5]
> >> https://github.com/openstack/networking-bgpvpn/blob/master/
> networking_bgpvpn/neutron/services/service_drivers/driver_api.py#L23
> >> [6]
> >> https://github.com/Juniper/contrail-neutron-plugin/blob/
> master/neutron_plugin_contrail/plugins/opencontrail/
> networking_bgpvpn/contrail.py#L36
> >>
> >> Regards,
> >> Édouard.
> >>
> >> On Mon, Jun 19, 2017 at 10:47 AM, Édouard Thuleau
> >>  wrote:
> >> > Hi,
> >> > Since Mitaka release [1], a default service plugins list is loaded
> >> > when Neutron server starts. That list is not editable and was extended
> >> > with few services [2]. But none of th
> >> >
> >> > [1]
> >> > https://github.com/openstack/neutron/commit/
> aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-
> 9169a6595980d19b2649d5bedfff05ce
> >> > [2]
> >> > https://github.com/openstack/neutron/blob/master/neutron/
> plugins/common/constants.py#L43
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: 

[openstack-dev] [monasca] New time and place of Monasca Team Meeting

2017-06-21 Thread witold.be...@est.fujitsu.com
Hello,

this is just a reminder of the new place and time of the Monasca Team Meeting. 
It takes place

weekly on Wednesday at 1400 UTC in #openstack-meeting


See you there
Witek

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-21 Thread joehuang
Hi,

Tricircle is based on the core plugin interface, so if you want to refactor
the interface, let us know whether it'll break Tricircle. And I don't know
whether other plugins are using this interface.

And there is one conclusion in this document
(https://github.com/openstack/neutron-specs/blob/master/specs/newton/neutron-stadium.rst):

To provide composable networking solutions: the ML2/Service plugin framework 
was introduced many cycles ago to enable users with freedom of choice. Many 
solutions have switched to using ML2/Service plugins for high level services 
over the years. Although some plugins still use the core plugin interface to 
provide end-to-end solutions, the criterion to enforce the adoption of ML2 and 
service plugins for Neutron Stadium projects does not invalidate, nor does make 
monolithic solutions deprecated. It is simply a reflection of the fact that the 
Neutron team stands behind composability as one of the promise of open 
networking solutions. During code review the Neutron team will continue to 
ensure that changes and design implications do not have a negative impact on 
out of tree code irrespective of whether it is part of the Stadium project or 
not.

Best Regards
Chaoyi Huang (joehuang)


From: Édouard Thuleau [edouard.thul...@gmail.com]
Sent: 21 June 2017 16:45
To: OpenStack Development Mailing List (not for usage questions); 
ke...@benton.pub
Cc: Sachin Bansal
Subject: Re: [openstack-dev] [neutron] Do we still support core plugin not 
based on the ML2 framework?

Ok, we would like to help on that. How can we start?

I think the issue I raise in that thread must be the first point to
address, and my second proposition seems to be the correct one. What do
you think?
But it will need some time, and I'm not sure we'll be able to fix all
service plugins loaded by default before the next Pike release.

I'd like to propose a workaround until all default service plugins are
compatible with non-DB core plugins. We can continue to load the
default service plugins list but authorize a core plugin to disable
it completely with a private attribute on the core plugin class, like
it's done for bulk/pagination/sorting operations.

Of course, we need to add the ability to report any regression on
that. I think unit tests will help and we can also work on a
functional test based on a fake non-DB core plugin.

Regards,
Édouard.

On Tue, Jun 20, 2017 at 12:09 AM, Kevin Benton  wrote:
> The issue is mainly developer resources. Everyone currently working upstream
> doesn't have the bandwidth to keep adding/reviewing the layers of interfaces
> to make the DB optional that go untested. (None of the projects that would
> use them run a CI system that reports results on Neutron patches.)
>
> I think we can certainly accept patches to do the things you are proposing,
> but there is no guarantee that it won't regress to being DB-dependent until
> there is something reporting results back telling us when it breaks.
>
> So it's not that the community is against non-DB core plugins, it's just
> that the people developing those plugins don't participate in the community
> to ensure they work.
>
> Cheers
>
>
> On Mon, Jun 19, 2017 at 2:15 AM, Édouard Thuleau 
> wrote:
>>
>> Oops, sent too fast, sorry. I try again.
>>
>> Hi,
>>
>> Since Mitaka release, a default service plugins list is loaded when
>> Neutron
>> server starts [1]. That list is not editable and was extended with few
>> services
>> [2]. But all of them rely on the Neutron DB model.
>>
>> If a core driver is not based on the ML2 core plugin framework or not
>> based on
>> the 'neutron.db.models_v2' class, all that service plugins will not work.
>>
>> So my first question is Does Neutron still support core plugin not based
>> on ML2
>> or 'neutron.db.models_v2' class?
>>
>> If yes, I would like to propose two solutions:
>> - permits core plugin to overload the service plugin class by its own
>> implementation and continuing to use the actual Neutron db based services
>> as
>> default.
>> - modifying all default plugin service to use service plugin driver
>> framework [3], and set the actual Neutron db based implementation as
>> default driver for services. That permits to core drivers not based on the
>> Neutron DB to specify a driver. We can see that solution was adopted in
>> the
>> networking-bgpvpn project, where we can find two abstract driver classes, one
>> for
>> core driver based on Neutron DB model [4] and one used by core driver not
>> based
>> on the DB [5] as the Contrail driver [6].
>>
>> [1]
>> https://github.com/openstack/neutron/commit/aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
>> [2]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
>> [3]
>> https://github.com/openstack/neutron/blob/master/neutron/services/service_base.py#L27
>> [4]
>> 

Re: [openstack-dev] [Keystone][Mistral][Devstack] Confusion between auth_url and auth_uri in keystone middleware

2017-06-21 Thread Mikhail Fedosin
Thanks for your help folks!

I proposed a patch for mistral and it seems it works now
https://review.openstack.org/#/c/473796
I'm not a great expert on this issue, so it would be great if someone from
the keystone team could review the patch.
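
For reference, the both-options workaround ends up looking roughly like
this in the [keystone_authtoken] section (endpoints and credentials are
illustrative only):

  [keystone_authtoken]
  auth_uri = http://controller:5000/v3
  auth_type = password
  auth_url = http://controller:35357/v3
  username = mistral
  password = secret
  project_name = service
  user_domain_name = Default
  project_domain_name = Default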

Best,
Mike

On Wed, Jun 21, 2017 at 4:15 AM, Jamie Lennox  wrote:

>
>
> On 16 June 2017 at 00:44, Mikhail Fedosin  wrote:
>
>> Thanks György!
>>
>> On Thu, Jun 15, 2017 at 1:55 PM, Gyorgy Szombathelyi <
>> gyorgy.szombathe...@doclerholding.com> wrote:
>>
>>> Hi Mikhail,
>>>
>>> (I'm not from the Keystone team, but did some patches for using
>>> keystonauth1).
>>>
>>> >
>>> > 2. Even if auth_url is set, it can't be used later, because it is not
>>> registered in
>>> > oslo_config [5]
>>>
>>> auth_url is actually a dynamic parameter and depends on the keystone
>>> auth plugin used
>>> (auth_type=xxx). The plugin which needs this parameter, registers it.
>>>
>>
>> Based on this http://paste.openstack.org/show/612664/ I would say that
>> the plugin doesn't register it :(
>> It either can be a bug, or it was done intentionally, I don't know.
>>
>>
>>>
>>> >
>>> > So I would like to get an advise from keystone team and understand
>>> what I
>>> > should do in such cases. Official documentation doesn't add clarity on
>>> the
>>> > matter because it recommends to use auth_uri in some cases and
>>> auth_url in
>>> > others.
>>> > My suggestion is to add auth_url in the list of keystone authtoken
>>> > middleware config options, so that the parameter can be used by the
>>> others.
>>>
>>> Yepp, this makes some confusion, but adding auth_url will make a clash
>>> with
>>> most (all?) authentication plugins. auth_url can be considered as an
>>> 'internal'
>>> option for the keystoneauth1 modules, and not used by anything else (like
>>> the keystonemiddleware itself). However if there would be a more elagant
>>> solution, I would also hear about it.
>>>
>>> >
>>> > Best,
>>> > Mike
>>> >
>>> Br,
>>> György
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> My final thought that we have to use both (auth_url and auth_uri) options
>> in mistral config, which looks ugly, but necessary.
>>
>> Best,
>> Mike
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> Hi,
>
> I feel like the question has been answered in the thread, but as i'm
> largely responsible for this I thought i'd pipe up here.
>
> It's annoying and unfortunate that auth_uri and auth_url look so similar.
> They've actually existed for some time side by side and ended up like that
> out of evolution rather that any thought. Interestingly the first result
> for auth_uri in google is [1]. I'd be happy to rename it for something else
> if we can agree on what.
>
> Regarding your paste (and the reason I popped up), I would consider this a
> bug in mistral. The auth options aren't registered into oslo.config until
> just before the plugin is loaded because depending on what you put in for
> auth_type the options may be different. In practice pretty much every
> plugin has an auth_url, but mistral shouldn't be assuming anything about
> the structure of [keystone_authtoken]. That's the sole responsibility of
> keystonemiddleware and it does change over time.
>
> Jamie
>
>
> [1] https://adam.younglogic.com/2016/06/auth_uri-vs-auth_url/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-21 Thread Thierry Carrez
Zane Bitter wrote:
> [...]
> Until then it seems to me that the tradeoff is between decoupling it
> from the particular cloud it's running on so that users can optionally
> deploy it standalone (essentially Vish's proposed solution for the *aaS
> services from many moons ago) vs. decoupling it from OpenStack in
> general so that the operator has more flexibility in how to deploy.
> 
> I'd love to be able to cover both - from a user using it standalone to
> spin up and manage a DB in containers on a shared PaaS, through to a
> user accessing it as a service to provide a DB running on a dedicated VM
> or bare metal server, and everything in between. I don't know is such a
> thing is feasible. I suspect we're going to have to talk a lot about VMs
> and network plumbing and volume storage :)

As another data point, we are seeing this very same tradeoff with Magnum
vs. Tessmaster (with "I want to get a Kubernetes cluster" rather than "I
want to get a database").

Tessmaster is the user-side tool from eBay deploying Kubernetes on
different underlying cloud infrastructures: it takes a bunch of cloud
credentials, then deploys, grows and shrinks Kubernetes clusters for you.

Magnum is the infrastructure-side tool from OpenStack giving you
COE-as-a-service, through a provisioning API.

Jay is advocating for Trove to be more like Tessmaster, and less like
Magnum. I think I agree with Zane that those are two different approaches:

From a public cloud provider perspective serving lots of small users, I
think a provisioning API makes sense. The user in that case is in a
"black box" approach, so I think the resulting resources should not
really be accessible as VMs by the tenant, even if they end up being
Nova VMs. The provisioning API could propose several options (K8s or
Mesos, MySQL or PostgreSQL).

From a private cloud / hybrid cloud / large cloud user perspective, the
user-side deployment tool, letting you deploy the software on various
types of infrastructure, probably makes more sense. It's probably more
work to run it, but you gain in flexibility. That user-side tool would
probably not support multiple options, but be application-specific.

So yes, ideally we would cover both. Because they target different
users, and both are right...

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-21 Thread Édouard Thuleau
Ok, we would like to help on that. How can we start?

I think the issue I raise in that thread must be the first point to
address, and my second proposition seems to be the correct one. What do
you think?
But it will need some time, and I'm not sure we'll be able to fix all
service plugins loaded by default before the next Pike release.

I'd like to propose a workaround until all default service plugins are
compatible with non-DB core plugins. We can continue to load the
default service plugins list but authorize a core plugin to disable
it completely with a private attribute on the core plugin class, like
it's done for bulk/pagination/sorting operations.
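
A minimal sketch of that idea, modelled on the existing private
attributes in the tree (only the class attributes are shown; the last
attribute name is hypothetical, as proposed in this thread):

  from neutron import neutron_plugin_base_v2

  class MyNonDbCorePlugin(neutron_plugin_base_v2.NeutronPluginBaseV2):
      # existing precedent for private feature toggles:
      __native_bulk_support = True
      __native_pagination_support = True
      __native_sorting_support = True
      # hypothetical opt-out of the default service plugins list:
      _skip_default_service_plugins = True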

Of course, we need to add the ability to report any regression on
that. I think unit tests will help and we can also work on a
functional test based on a fake non-DB core plugin.

Regards,
Édouard.

On Tue, Jun 20, 2017 at 12:09 AM, Kevin Benton  wrote:
> The issue is mainly developer resources. Everyone currently working upstream
> doesn't have the bandwidth to keep adding/reviewing the layers of interfaces
> to make the DB optional that go untested. (None of the projects that would
> use them run a CI system that reports results on Neutron patches.)
>
> I think we can certainly accept patches to do the things you are proposing,
> but there is no guarantee that it won't regress to being DB-dependent until
> there is something reporting results back telling us when it breaks.
>
> So it's not that the community is against non-DB core plugins, it's just
> that the people developing those plugins don't participate in the community
> to ensure they work.
>
> Cheers
>
>
> On Mon, Jun 19, 2017 at 2:15 AM, Édouard Thuleau 
> wrote:
>>
>> Oops, sent too fast, sorry. I try again.
>>
>> Hi,
>>
>> Since Mitaka release, a default service plugins list is loaded when
>> Neutron
>> server starts [1]. That list is not editable and was extended with few
>> services
>> [2]. But all of them rely on the Neutron DB model.
>>
>> If a core driver is not based on the ML2 core plugin framework or not
>> based on
>> the 'neutron.db.models_v2' class, all that service plugins will not work.
>>
>> So my first question is Does Neutron still support core plugin not based
>> on ML2
>> or 'neutron.db.models_v2' class?
>>
>> If yes, I would like to propose two solutions:
>> - permits core plugin to overload the service plugin class by its own
>> implementation and continuing to use the actual Neutron db based services
>> as
>> default.
>> - modifying all default plugin service to use service plugin driver
>> framework [3], and set the actual Neutron db based implementation as
>> default driver for services. That permits to core drivers not based on the
>> Neutron DB to specify a driver. We can see that solution was adopted in
>> the
>> networking-bgpvpn project, where we can find two abstract driver classes, one
>> for
>> core driver based on Neutron DB model [4] and one used by core driver not
>> based
>> on the DB [5] as the Contrail driver [6].
>>
>> [1]
>> https://github.com/openstack/neutron/commit/aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
>> [2]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
>> [3]
>> https://github.com/openstack/neutron/blob/master/neutron/services/service_base.py#L27
>> [4]
>> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/driver_api.py#L226
>> [5]
>> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/driver_api.py#L23
>> [6]
>> https://github.com/Juniper/contrail-neutron-plugin/blob/master/neutron_plugin_contrail/plugins/opencontrail/networking_bgpvpn/contrail.py#L36
>>
>> Regards,
>> Édouard.
>>
>> On Mon, Jun 19, 2017 at 10:47 AM, Édouard Thuleau
>>  wrote:
>> > Hi,
>> > Since Mitaka release [1], a default service plugins list is loaded
>> > when Neutron server starts. That list is not editable and was extended
>> > with few services [2]. But none of th
>> >
>> > [1]
>> > https://github.com/openstack/neutron/commit/aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
>> > [2]
>> > https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-21 Thread sfinucan
On Tue, 2017-06-20 at 16:48 -0600, Chris Friesen wrote:
> On 06/20/2017 09:51 AM, Eric Fried wrote:
> > Nice Stephen!
> > 
> > For those who aren't aware, the rendered version (pretty, so pretty) can
> > be accessed via the gate-nova-docs-ubuntu-xenial jenkins job:
> > 
> > http://docs-draft.openstack.org/10/475810/1/check/gate-nova-docs-ubuntu-xen
> > ial/25e5173//doc/build/html/scheduling.html?highlight=scheduling
> 
> Can we teach it to not put line breaks in the middle of words in the text
> boxes?

Doesn't seem configurable in its current form :( This, and the defaulting to
PNG output instead of SVG (which makes things ungreppable) are my biggest
bugbear.

I'll go have a look at the sauce and see what can be done about it. If not,
still better than nothing?

Stephen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-21 Thread Henning Schild
On Tue, 20 Jun 2017 10:41:44 -0600, Chris Friesen wrote:

> On 06/20/2017 01:48 AM, Henning Schild wrote:
> > Hi,
> >
> > We are using OpenStack for managing realtime guests. We modified
> > it and contributed to discussions on how to model the realtime
> > feature. More recent versions of OpenStack have support for
> > realtime, and there are a few proposals on how to improve that
> > further.
> >
> > But there is still no full answer on how to distribute threads
> > across host-cores. The vcpus are easy but for the emulation and
> > io-threads there are multiple options. I would like to collect the
> > constraints from a qemu/kvm perspective first, and then possibly
> > influence the OpenStack development
> >
> > I will put the summary/questions first, the text below provides more
> > context to where the questions come from.
> > - How do you distribute your threads when reaching the really low
> >cyclictest results in the guests? In [3] Rik talked about
> > problems like lock holder preemption, starvation etc. but not
> > where/how to schedule emulators and io
> > - Is it ok to put a vcpu and emulator thread on the same core as
> > long as the guest knows about it? Any funny behaving guest, not
> > just Linux.
> > - Is it ok to make the emulators potentially slow by running them on
> >busy best-effort cores, or will they quickly be on the critical
> > path if you do more than just cyclictest? - our experience says we
> > don't need them reactive even with rt-networking involved
> >
> >
> > Our goal is to reach a high packing density of realtime VMs. Our
> > pragmatic first choice was to run all non-vcpu-threads on a shared
> > set of pcpus where we also run best-effort VMs and host load.
> > Now the OpenStack guys are not too happy with that because that is
> > load outside the assigned resources, which leads to quota and
> > accounting problems.  
> 
> If you wanted to go this route, you could just edit the
> "vcpu_pin_set" entry in nova.conf on the compute nodes so that nova
> doesn't actually know about all of the host vCPUs.  Then you could
> run host load and emulator threads on the pCPUs that nova doesn't
> know about, and there will be no quota/accounting issues in nova.

Exactly, that is the idea, but OpenStack currently does not allow it.
No thread will ever end up on a core outside the vcpu_pin_set, and the
emulator/io-threads are controlled by OpenStack/libvirt. You would also
need a way to specify exactly which cores outside vcpu_pin_set are
allowed for breaking out of that set.
On our compute nodes we also have cores for host-realtime tasks, i.e.
dpdk-based rt-networking.
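
To make that concrete: the only supported knob today is the pinning set
in nova.conf (core ranges illustrative), and there is no option yet to
name the cores outside that set that the emulator/io threads may use:

  [DEFAULT]
  # nova will only place guest vcpu threads on these host cores
  vcpu_pin_set = 4-15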

Henning

> Chris
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread joehuang
Hi,

If we just want to replace the "big tent" concept with another concept
mentioned in this thread: many of the proposals gave me the impression
that some projects are still more important than others, which is why I
suggest using a flat project list and putting the stress on the "OPEN" in
OpenStack.

Best Regards
Chaoyi Huang (joehuang)


From: Flavio Percoco [fla...@redhat.com]
Sent: 21 June 2017 15:44
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

On 21/06/17 06:18 +, joehuang wrote:
>hello, Flavio,

Hi :D

>This thread is to discuss moving away from the "big tent" term, not
>removing some project. Removing a project will make its flavor disappear
>from the ice-cream counter, but this thread is about using another
>concept to describe projects under openstack project governance.
>If we don't want to use "big tent" for those projects staying in the
>counter, I hope all projects can be treated flat, just like different
>ice-cream flavors are flat in the same counter, and kids can make choices
>by themselves.
>
>Even Nova may only be "core" to some cloud operators, not all of them;
>for example, those who only run an object storage service. hyper.sh also
>does not use Nova, and some day some cloud operators may only use Zun or
>K8S instead for computing. It should not be an issue for the OpenStack
>community.

I think you misunderstood my message. I'm not talking about removing projects,
I'm talking about the staging of these projects to join the "Big tent" -
regardless of how we call it. The distinction *is* important and we ought to
find a way to preserve it and communicate it so that there's the least amount of
confusion possible.


>OpenStack should be an "OPEN" stack for infrastructure: just like a kid
>can choose how many balls of ice-cream they want, cloud operators can
>decide which projects to use (or not) to manage their infrastructure.

You keep mentioning "OPEN stack" as if we weren't being open (enough?) and I
think I'm failing to see why you think that. Could you please elaborate more?
What you're describing seems to be the current status.

Flavio

>Best Regards
>Chaoyi Huang (joehuang)
>
>
>From: Flavio Percoco [fla...@redhat.com]
>Sent: 20 June 2017 17:44
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology
>
>On 20/06/17 00:33 +, joehuang wrote:
>>I think it is good enough for the openstack community to provide a flat
>>project market place for infrastructure:
>>
>>all projects are just "goods" in the market place; let the cloud
>>operators select projects from the project market place for their own
>>infrastructure.
>>
>>We don't have to mark a project as a core project or not; we only need
>>to tag attributes of a project, for example how mature it is, how many
>>"likes" it has, what cloud operators said about the project, etc.
>>
>>All flat; just let people make decisions by themselves. They are not
>>idiots, they have the wisdom to build their own infrastructure.
>>
>>Not all people need a package: you may buy a package of ice-cream, but
>>you won't like all of it. If they want a package, a distribution
>>provider can help them define and customize one; if you want
>>customization, you decide which balls of ice-cream you want, don't you?
>
>The flavors you see in an ice-cream shop counter are not there by accident.
>Those
>flavors have gone through a creation process, they have been tested and they
>have also survived over the years. Some flavors are removed with time and some
>others stay there forever.
>
>Unfortunately, tagging those flavors won't cut it, which is why you don't see
>tags in their labels when you go to an ice-cream shop. Some tags are implied,
>other tags are inferred and other tags are subjective.
>
>Experimenting with new flavors doesn't happen overnight in some person's
>bedroom. The new flavors are tested using the *same* infrastructure as the 
>other
>flavors and once they reach a level of maturity, they are exposed in the 
>counter
>so that customers will be able to consume them.
>
>Ultimately, experimentation is part of the ice-cream shop's mission and it
>requires time, effort and resources but not all experiments end well. At the
>end, though, what really matters is that all these flavors serve the same
>mission and that's why they are sold at the ice-cream shop, that's why they are
>exposed in the counter. Customers of the ice-cream shop know they can trust
>what's in the counter. They know the exposed flavors serve their needs at a 
>high
>level and they can now focus on their specific needs.
>
>So, do you really think it's just a set of flavors and it doesn't really matter
>how those flavors got there?
>
>Flavio
>
>--
>@flaper87
>Flavio Percoco
>

[openstack-dev] [publiccloud-wg] Reminder meeting PublicCloudWorkingGroup

2017-06-21 Thread Tobias Rydberg

Hi everyone,

Don't forget today's meeting for the PublicCloudWorkingGroup.
1400 UTC in IRC channel #openstack-meeting-3

Etherpad: https://etherpad.openstack.org/p/publiccloud-wg

Goals: https://etherpad.openstack.org/p/SYDNEY_GOALS_publiccloud-wg

Talk to you this afternoon!

Tobias
tob...@citynetwork.se


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Flavio Percoco

On 21/06/17 06:18 +, joehuang wrote:

hello, Flavio,


Hi :D


This thread is to discuss moving away from the "big tent" term, not
removing some project. Removing a project will make its flavor disappear
from the ice-cream counter, but this thread is about using another
concept to describe projects under openstack project governance.
If we don't want to use "big tent" for those projects staying in the
counter, I hope all projects can be treated flat, just like different
ice-cream flavors are flat in the same counter, and kids can make choices
by themselves.

Even Nova may only be "core" to some cloud operators, not all of them;
for example, those who only run an object storage service. hyper.sh also
does not use Nova, and some day some cloud operators may only use Zun or
K8S instead for computing. It should not be an issue for the OpenStack
community.


I think you misunderstood my message. I'm not talking about removing projects,
I'm talking about the staging of these projects to join the "Big tent" -
regardless of how we call it. The distinction *is* important and we ought to
find a way to preserve it and communicate it so that there's the least amount of
confusion possible.



OpenStack should be an "OPEN" stack for infrastructure: just like a kid
can choose how many balls of ice-cream they want, cloud operators can
decide which projects to use (or not) to manage their infrastructure.


You keep mentioning "OPEN stack" as if we weren't being open (enough?) and I
think I'm failing to see why you think that. Could you please elaborate more?
What you're describing seems to be the current status.

Flavio


Best Regards
Chaoyi Huang (joehuang)


From: Flavio Percoco [fla...@redhat.com]
Sent: 20 June 2017 17:44
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

On 20/06/17 00:33 +, joehuang wrote:

I think it is good enough for the openstack community to provide a flat
project market place for infrastructure:

all projects are just "goods" in the market place; let the cloud
operators select projects from the project market place for their own
infrastructure.

We don't have to mark a project as a core project or not; we only need
to tag attributes of a project, for example how mature it is, how many
"likes" it has, what cloud operators said about the project, etc.

All flat; just let people make decisions by themselves. They are not
idiots, they have the wisdom to build their own infrastructure.

Not all people need a package: you may buy a package of ice-cream, but
you won't like all of it. If they want a package, a distribution provider
can help them define and customize one; if you want customization, you
decide which balls of ice-cream you want, don't you?


The flavors you see in an ice-cream shop counter are not there by accident. Those
flavors have gone through a creation process, they have been tested and they
have also survived over the years. Some flavors are removed with time and some
others stay there forever.

Unfortunately, tagging those flavors won't cut it, which is why you don't see
tags in their labels when you go to an ice-cream shop. Some tags are implied,
other tags are inferred and other tags are subjective.

Experimenting with new flavors doesn't happen overnight in some person's
bedroom. The new flavors are tested using the *same* infrastructure as the other
flavors and once they reach a level of maturity, they are exposed in the counter
so that customers will be able to consume them.

Ultimately, experimentation is part of the ice-cream shop's mission and it
requires time, effort and resources, but not all experiments end well. In the
end, though, what really matters is that all these flavors serve the same
mission and that's why they are sold at the ice-cream shop, that's why they are
exposed in the counter. Customers of the ice-cream shop know they can trust
what's in the counter. They know the exposed flavors serve their needs at a high
level and they can now focus on their specific needs.

So, do you really think it's just a set of flavors and it doesn't really matter
how those flavors got there?

Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Deprecate/Remove deferred_auth_method=password config option

2017-06-21 Thread Steven Hardy
On Fri, Jun 16, 2017 at 10:09 AM, Kaz Shinohara  wrote:
> Hi Rabi,
>
>
> I still take `deferred_auth_method=password` instead of trusts, because we
> don't enable trusts on the Keystone side for internal reasons.
> The issues you pointed out are real (e.g. user_domain_id); we don't make
> much use of domains and have added some patches to work around them.
> But I guess the majority of Heat users have already moved to trusts, and it
> is obviously the better solution in terms of security and granular role
> control.
> As for the (perhaps) edge case where a user wants password auth, it would
> be too tricky for them to introduce it, so I agree with your second option.
>
> If we remove `deferred_auth_method=password` from heat.conf, should we
> keep `deferred_auth_method` itself, or replace it with a new config option
> just to enable/disable trusts? Do you have any idea on this?

I don't think it makes sense to have an enable/disable trusts config
option unless there is an alternative (e.g. we've discussed OAuth in
the past, and in future there may be alternatives to trusts).

I guess if there were sufficient interest we could have some option
that blacklists all resources that require deferred authentication,
but I'm not sure folks are actually asking for that right now?

My preference is to deprecate deferred_auth_method, since right now
there's not really any alternative that works for us.

> Also, I'm thinking that `reauthentication_method` might also be
> changed/merged?

No, I think we still need this, because it is disabled by default.
This option allows you to enable defeating token expiry via trusts,
which is something an operator must opt in to IMO (we should not
enable it by default, as it's really only intended for certain edge
cases such as TripleO, where there are often very long-running stack
operations that may exceed the keystone token expiry).
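
For reference, a trusts-based setup would look roughly like this in
heat.conf (a sketch only; the thread abbreviates the second option as
`reauthentication_method`, but as far as I know it is spelled
`reauthentication_auth_method`, so verify the exact option name and valid
values against your Heat release):

    [DEFAULT]
    # Delegate deferred operations (e.g. autoscaling actions) to the
    # heat service user via Keystone trusts rather than a stored password.
    deferred_auth_method = trusts
    # Opt-in: re-authenticate via the trust when the user token expires.
    # This deliberately defeats token expiry for long-running stacks.
    reauthentication_auth_method = trusts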

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] diskimage builder works for trusty but not for xenial

2017-06-21 Thread Paul Belanger
On Wed, Jun 21, 2017 at 08:44:45AM +0200, Ignazio Cassano wrote:
> Hi all,
> today I am creating openstack images with disk-image-create.
> It works fine for centos and ubuntu trusty.
> With xenial I got the following error:
> 
> * Connection #0 to host cloud-images.ubuntu.com left intact
> Downloaded and cached
> http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz,
> having forced upstream caches to revalidate
> xenial-server-cloudimg-amd64-root.tar.gz: FAILED
> sha256sum: WARNING: 1 computed checksum did NOT match
> 
> Are there any problems on http://cloud-images.ubuntu.com ?
> Regards
> Ignazio

I would suggest using the ubuntu-minimal and centos-minimal elements for
creating images. This is what we do in openstack-infra today, which means
you'll build the image directly from dpkg / rpm sources rather than
downloading existing images from upstream.

This will solve the issue you are hitting when upstream images have been
removed, deleted, or fail to download.
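
For example, something like the following should build a Xenial image
without fetching anything from cloud-images.ubuntu.com (a rough sketch,
untested here; ubuntu-minimal and vm are standard diskimage-builder
elements, and DIB_RELEASE selects the Ubuntu release):

    # build a bootable Ubuntu Xenial image from the package archives
    export DIB_RELEASE=xenial
    disk-image-create -o ubuntu-xenial ubuntu-minimal vm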

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] diskimage builder works for trusty but not for xenial

2017-06-21 Thread Ignazio Cassano
Hi all,
today I am creating openstack images with disk-image-create.
It works fine for centos and ubuntu trusty.
With xenial I got the following error:

* Connection #0 to host cloud-images.ubuntu.com left intact
Downloaded and cached
http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz,
having forced upstream caches to revalidate
xenial-server-cloudimg-amd64-root.tar.gz: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match
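
For what it's worth, the tarball can also be cross-checked by hand against
the published checksum list (a quick sketch; the SHA256SUMS file sits next
to the images on the same server):

    curl -O http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz
    curl -O http://cloud-images.ubuntu.com/xenial/current/SHA256SUMS
    # check only the root tarball's entry
    grep xenial-server-cloudimg-amd64-root.tar.gz SHA256SUMS | sha256sum -c -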

Are there any problems on http://cloud-images.ubuntu.com ?
Regards
Ignazio
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Deprecate/Remove deferred_auth_method=password config option

2017-06-21 Thread Rabi Mishra
On Fri, Jun 16, 2017 at 7:03 PM, Zane Bitter  wrote:
[snip]

>
>> I'm not sure whether this works with Keystone v2, or whether anyone is
>> using it. Keeping in mind that heat-cli is deprecated and Keystone
>> v3 is now the default, we have two options:
>>
>> 1. Continue to support the 'deferred_auth_method=password' option and
>> fix all the above issues.
>>
>> 2. Remove/deprecate the option in Pike itself.
>>
>> I would prefer option 2, but I probably miss some history and use
>> cases for it.
>>
>
> Am I right in thinking that any user (i.e. not just the [heat] service
> user) can create a trust? I still see occasional requests about 'standalone
> mode' for clouds that don't have Heat available to users (which I suspect
> is broken, otherwise people wouldn't be asking), and I'm guessing that
> standalone mode has heretofore required deferred_auth_method=password.
>

I think standalone heat is broken in more than one way, based on my testing.
It seems changes have not kept up with heat standalone: the 'authpassword'
middleware is broken [1], and we don't seem to pass the correct domain
details in the RPC context. I've tried to fix both in [2].

I'm also not sure why heat standalone historically restricts
deferred_auth_method to 'password' [3]. It seems to work well with 'trusts',
though.


[1]  https://bugs.launchpad.net/heat/+bug/1699418
[2]  https://review.openstack.org/#/c/476014/
[3]  https://github.com/openstack/heat/blob/master/devstack/lib/heat#L74


> So if we're going to remove the option then we should probably either
> officially disown standalone mode or rewrite the instructions such that it
> can be used with the trusts method.
>
I think disowning the standalone mode would be the easier option. But
probably we should rewrite the instructions for it to be used with the
'trusts' method, as that seems to work, unless I'm missing something.
However, without any testing at the gate we would surely break it from
time to time.
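
On Zane's question above: as far as I know, yes, any authenticated user can
create a trust delegating their own roles. With python-openstackclient it
would look roughly like this (a sketch; the project, role and user names
are illustrative, and the flags should be verified against your client
version):

    # The trustor (the stack owner) delegates the member role on their
    # project to the heat service user (the trustee).
    openstack trust create --project demo-project --role member \
        --impersonate demo-user heat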


> cheers,
> Zane.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread joehuang
hello, Flavio,

This thread is about moving away from the "big tent" term, not about
removing projects. Removing a project would make that flavor disappear from
the ice-cream counter, whereas this thread is about finding another concept
to describe the projects under OpenStack project governance.
If we don't want to use "big tent" for the projects staying in the counter,
I hope all projects can be treated as flat, just as different ice-cream
flavors sit side by side in the same counter and a kid can make a choice by
themselves.

Even Nova may be "core" only to some cloud operators, not to all of them;
for example, operators who run only an object storage service do not use it,
and neither does hyper.sh. Some day some operators may use only Zun or K8s
for compute, and that should not be an issue for the OpenStack community.

OpenStack should be an "OPEN" stack for infrastructure: just as a kid can
choose how many scoops of ice-cream to have, cloud operators can decide
which projects to use, or not to use, to manage their infrastructure.

Best Regards
Chaoyi Huang (joehuang)


From: Flavio Percoco [fla...@redhat.com]
Sent: 20 June 2017 17:44
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

On 20/06/17 00:33 +, joehuang wrote:
>I think it is good enough for the OpenStack community to provide a flat
>project marketplace for infrastructure:
>
>all projects are just "goods" in the marketplace; let cloud operators select
>projects from it for their own infrastructure.
>
>We don't have to mark a project as a core project or not; we only need to tag
>the attributes of a project, for example how mature it is, how many "likes"
>it has, and what cloud operators have said about it.
>
>All flat: just let people make decisions by themselves. They are not idiots;
>they have the wisdom to build their own infrastructure.
>
>Not everyone needs a package: you may buy a package of ice-cream, but you
>will not like all of it. If people want a package, a distribution provider
>can help them define and customize one; and if you want customization, you
>decide which scoop of ice-cream you want, don't you?

The flavors you see in an ice-cream shop counter are not there by accident. Those
flavors have gone through a creation process, they have been tested and they
have also survived over the years. Some flavors are removed with time and some
others stay there forever.

Unfortunately, tagging those flavors won't cut it, which is why you don't see
tags in their labels when you go to an ice-cream shop. Some tags are implied,
other tags are inferred and other tags are subjective.

Experimenting with new flavors doesn't happen overnight in some person's
bedroom. The new flavors are tested using the *same* infrastructure as the other
flavors and once they reach a level of maturity, they are exposed in the counter
so that customers will be able to consume them.

Ultimately, experimentation is part of the ice-cream shop's mission and it
requires time, effort and resources, but not all experiments end well. In the
end, though, what really matters is that all these flavors serve the same
mission and that's why they are sold at the ice-cream shop, that's why they are
exposed in the counter. Customers of the ice-cream shop know they can trust
what's in the counter. They know the exposed flavors serve their needs at a high
level and they can now focus on their specific needs.

So, do you really think it's just a set of flavors and it doesn't really matter
how those flavors got there?

Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev