Re: [openstack-dev] [horizon][release] freeze timelines for horizon in newton

2016-05-01 Thread Rob Cresswell
Thanks Amrith. I'm referring to the release timeline when I said expectation; 
previously every plugin had been on an independent release, rather than being 
the same as Horizon. Several plugins have changed this without any 
communication; it would've been better to let us know at the time so we 
could've discussed earlier, or during the summit for example.

Rob

On 1 May 2016 11:26 p.m., "Amrith Kumar" wrote:
From: Rob Cresswell (rcresswe) [mailto:rcres...@cisco.com]
Sent: Friday, April 29, 2016 4:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [horizon][release] freeze timelines for horizon in 
newton

This has been discussed (just now) in the release management plan 
(https://etherpad.openstack.org/p/newton-relmgt-plan); see point 8 under
Communication/Governance Changes. From an immediate standpoint, the RC phase of 
this cycle will be much stricter to prevent late breakages. Going forward, 
we’re likely going to establish an earlier feature freeze too, pending 
community discussion.
[amrith] Thanks Rob, I’ve updated the etherpad with a link to this mail thread.

On a separate note, this email prompted me to scan the governance for the 
dashboard plugins and it became apparent that several have changed their 
release tags, without informing Horizon of this release cadence expectation via 
IRC, email, or our plugin feedback fishbowl. If we are to continue building a 
good plugin ecosystem, the plugins *must* communicate their expectations to us 
upstream; we do not have the time to monitor every plugin.
[amrith] I’m unsure what this expectation is. I tried to google it but came up 
empty.

I assume that you are asking that we let you know when we release a new version 
of the dashboard, is that right? Or is it something else? In any event, would 
you provide me a link to something describing this expectation and I’ll make 
sure that we try and stay true to it.

Rob


On 29 Apr 2016, at 11:26, Amrith Kumar wrote:

In the Trove review of the release schedule this morning, and in the 
retrospective of the mitaka release process, one question which was raised was 
the linkage between projects like Trove and Horizon.

This came up in the specific context of projects like Trove (in the form of 
the trove-dashboard repository) breaking late in the Mitaka cycle [3]: a 
change in Horizon caused Trove to break very close to the feature freeze date.

So the question is whether we can assume that projects like Horizon will freeze 
in R-6 to ensure that (for example) Trove will freeze in R-5.

Thanks,

-amrith

[1] http://releases.openstack.org/newton/schedule.html
[2] https://review.openstack.org/#/c/311123/
[3] https://review.openstack.org/#/c/307221/


Re: [openstack-dev] [tricircle] Requirements for becoming approved official project

2016-05-01 Thread Zhipeng Huang
Thanks Shinobu,

This is a great checklist! Re the PTL election, could we use community tools
for that?
On May 2, 2016 1:09 PM, "Shinobu Kinjo" wrote:

Hi Team,

According to the Documentation team, the Tricircle project seems close to
being official (though not formally official yet) since we have the
standard directory structure. [1]

What we need to do from here is submit a project-config patch to
publish the documentation to [2]. Once we finish this stage, we would
have a guide under the developer docs.

After this, we're required to apply to the TC by adding our
repository to the governance repository. [3] [4] What the TC will check
is described in [5].

As of now, a summary of the remaining processes to join OpenStack:

 1. Approval by the TC
 2. PTL election

I've probably still missed something. If so, please point it out to me.

BTW, since we've already done some bug fixes, [6] is worth noting and
keeping in mind.

[1] https://git.openstack.org/cgit/openstack/tricircle/tree/
[2] http://docs.openstack.org/developer/tricircle
[3] http://docs.openstack.org/project-team-guide/open-community.html#technical-committee-and-ptl-elections
[4] http://governance.openstack.org/
[5] http://governance.openstack.org/reference/new-projects-requirements.html
[6] http://governance.openstack.org/reference/project-testing-interface.html

Cheers,
Shinobu



Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-01 Thread Swapnil Kulkarni
On Mon, May 2, 2016 at 9:54 AM, Britt Houser (bhouser) wrote:
> Although it seems I'm in the minority, I am in favor of unified repo.
>
> From: "Steven Dake (stdake)" 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Sunday, May 1, 2016 at 5:03 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: [openstack-dev] [kolla][kubernetes] One repo vs two
>
> Ryan had rightly pointed out that when we made the original proposal at 9am
> that morning, we had asked folks if they wanted to participate in a separate
> repository.
>
> I don't think a separate repository is the correct approach based upon one
> off private conversations with folks at summit.  Many people from that list
> approached me and indicated they would like to see the work integrated in
> one repository as outlined in my vote proposal email.  The reasons I heard
> were:
>
> Better integration of the community
> Better integration of the code base
> Doesn't present an us vs them mentality that one could argue happened during
> kolla-mesos
> A second repository makes k8s a second class citizen deployment architecture
> without a voice in the full deployment methodology
> Two gating methods versus one
> No going back to a unified repository while preserving git history
>
> In favor of the separate repositories, I heard:
>
> It presents a unified workspace for kubernetes alone
> Packaging without ansible is simpler as the ansible directory need not be
> deleted
>
> There were other complaints but not many pros.  Unfortunately I failed to
> communicate these complaints to the core team prior to the vote, so now is
> the time for fixing that.
>
> I'll leave it open to the new folks that want to do the work if they want to
> work on an offshoot repository and open us up to the possible problems
> above.
>
> If you are on this list:
>
> Ryan Hallisey
> Britt Houser
>
> mark casey
>
> Steven Dake (delta-alpha-kilo-echo)
>
> Michael Schmidt
>
> Marian Schwarz
>
> Andrew Battye
>
> Kevin Fox (kfox)
>
> Sidharth Surana (ssurana)
>
> Michal Rostecki (mrostecki)
>
> Swapnil Kulkarni (coolsvap)
>
> MD NADEEM (mail2nadeem92)
>
> Vikram Hosakote (vhosakot)
>
> Jeff Peeler (jpeeler)
>
> Martin Andre (mandre)
>
> Ian Main (Slower)
>
> Hui Kang (huikang)
>
> Serguei Bezverkhi (sbezverk)
>
> Alex Polvi (polvi)
>
> Rob Mason
>
> Alicja Kwasniewska
>
> sean mooney (sean-k-mooney)
>
> Keith Byrne (kbyrne)
>
> Zdenek Janda (xdeu)
>
> Brandon Jozsa (v1k0d3n)
>
> Rajath Agasthya (rajathagasthya)
> Jinay Vora
> Hui Kang
> Davanum Srinivas
>
>
>
> Please speak up if you are in favor of a separate repository or a unified
> repository.
>
> The core reviewers will still take responsibility for determining if we
> proceed on the action of implementing kubernetes in general.
>
> Thank you
> -steve
>
>


I am in favor of having two separate repos and evaluating the
merge/split option later. Though in the longer run, I would recommend
having a single repo with multiple stable deployment tools (maybe too
early to comment, but yeah).

Swapnil



[openstack-dev] [tricircle] Requirements for becoming approved official project

2016-05-01 Thread Shinobu Kinjo
Hi Team,

According to the Documentation team, the Tricircle project seems close to
being official (though not formally official yet) since we have the
standard directory structure. [1]

What we need to do from here is submit a project-config patch to
publish the documentation to [2]. Once we finish this stage, we would
have a guide under the developer docs.

After this, we're required to apply to the TC by adding our
repository to the governance repository. [3] [4] What the TC will check
is described in [5].

As of now, a summary of the remaining processes to join OpenStack:

 1. Approval by the TC
 2. PTL election

I've probably still missed something. If so, please point it out to me.

BTW, since we've already done some bug fixes, [6] is worth noting and
keeping in mind.

[1] https://git.openstack.org/cgit/openstack/tricircle/tree/
[2] http://docs.openstack.org/developer/tricircle
[3] http://docs.openstack.org/project-team-guide/open-community.html#technical-committee-and-ptl-elections
[4] http://governance.openstack.org/
[5] http://governance.openstack.org/reference/new-projects-requirements.html
[6] http://governance.openstack.org/reference/project-testing-interface.html

Cheers,
Shinobu



[openstack-dev] [tacker] No IRC weekly meeting this week

2016-05-01 Thread Sridhar Ramaswamy
Tackers -

Heads up: as decided in the last meeting [1], we are skipping the weekly
meeting this week, on Tuesday, May 3rd. We will resume the meeting next
week. There was a request in the design summit to move the meeting one hour
ahead to compensate for daylight saving time. I'll send an update in
the next few days with the meeting details (specifically the channel).

[1] http://eavesdrop.openstack.org/meetings/tacker/2016/tacker.2016-04-19-17.01.log.html


Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-01 Thread Swapnil Kulkarni
On Mon, May 2, 2016 at 10:08 AM, Vikram Hosakote (vhosakot) wrote:
> A separate repo will land us in the same spot as we had with kolla-mesos
> originally.  We had all kinds of variance in the implementation.
>
> I’m in favor of a single repo.
>
> +1 for the single repo.
>

I agree with you, Vikram, but we should consider the bootstrapping
requirements for new deployment technologies and learn from our
failures with kolla-mesos.

At the same time, it will help us evaluate the deployment technologies
going ahead without disrupting the kolla repo, which we can treat as a
repo with stable images & associated deployment tools.

> Regards,
> Vikram Hosakote
> IRC: vhosakot
>
> From: Vikram Hosakote
> Date: Sunday, May 1, 2016 at 11:36 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra]
> kolla-kubernetes repository management proposal up for vote
>
> Please add me too to the list!
>
> Regards,
> Vikram Hosakote
> IRC: vhosakot
>
> From: Michał Jastrzębski
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Saturday, April 30, 2016 at 9:58 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra]
> kolla-kubernetes repository management proposal up for vote
>
> Add me too please Steven.
>
> On 30 April 2016 at 09:50, Steven Dake (stdake) wrote:
>
> Fellow core reviewers,
>
> We had a fantastic turnout at our "kubernetes as an underlay for Kolla"
> fishbowl session.  The etherpad documents the folks interested and the
> discussion at summit [1].
>
> This proposal is mostly based upon a combination of several discussions at
> open design meetings coupled with the kubernetes underlay discussion.
>
> The proposal (and what we are voting on) is as follows:
>
> Folks in the following list will be added to a kolla-k8s-core group.
>
>   This kolla-k8s-core group will be responsible for code reviews and code
> submissions to the kolla repository for the /kubernetes top level directory.
> Individuals in kolla-k8s-core that consistently approve (+2) or disapprove
> (-2) changes to TLD directories other than kubernetes will be handled
> on a case by case basis with several "training warnings" followed by removal
> from the kolla-k8s-core group.  The kolla-k8s-core group will be added as a
> subgroup of the kolla-core reviewer team, which means they in effect have
> all of the same ACL access as the existing kolla repository.  I think it is
> better in this case to trust these individuals to do the right thing and only
> approve changes for the kubernetes directory until proposed for the
> kolla-core reviewer group where they can gate changes to any part of the
> repository.
>
> Britt Houser
>
> mark casey
>
> Steven Dake (delta-alpha-kilo-echo)
>
> Michael Schmidt
>
> Marian Schwarz
>
> Andrew Battye
>
> Kevin Fox (kfox)
>
> Sidharth Surana (ssurana)
>
> Michal Rostecki (mrostecki)
>
> Swapnil Kulkarni (coolsvap)
>
> MD NADEEM (mail2nadeem92)
>
> Vikram Hosakote (vhosakot)
>
> Jeff Peeler (jpeeler)
>
> Martin Andre (mandre)
>
> Ian Main (Slower)
>
> Hui Kang (huikang)
>
> Serguei Bezverkhi (sbezverk)
>
> Alex Polvi (polvi)
>
> Rob Mason
>
> Alicja Kwasniewska
>
> sean mooney (sean-k-mooney)
>
> Keith Byrne (kbyrne)
>
> Zdenek Janda (xdeu)
>
> Brandon Jozsa (v1k0d3n)
>
> Rajath Agasthya (rajathagasthya)
>
>
> If you already are in the kolla-core review team, you won't be added to the
> kolla-k8s-core team as you will already have the necessary ACLs to do the
> job.  If you feel you would like to join this initial bootstrapping process,
> please add your name to the etherpad in [1].
>
> After 8 weeks (July 15th), folks that have not been actively reviewing or
> committing code will be removed from the kolla-k8s-core group.  We will use
> the governance repository metrics for team size [2] which is either 30
> reviews over 6 months (in this case, 10 reviews), or 6 commits over 6 months
> (in this case 2 commits) to the repository.  Folks that don't meet the
> qualifications are still welcome to commit to the repository and contribute
> code or documentation but will lose approval rights on patches.
>
> The kubernetes codebase will be maintained in the
> https://github.com/openstack/kolla repository under the kubernetes top level
> directory.  Contributors that become active in the kolla repository itself
> will be proposed over time to the kolla-core group.  Only core-kolla members
> will be permitted to participate in policy decisions and voting thereof, so
> there is some minimal extra responsibility involved in joining the
> kolla-core ACL team for those folks wanting to move into the kolla core team
> over time.  The goal will be to over time entirely remove the kolla-k8s-core
> team and make one core reviewer team in the kolla-core ACL.
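
As an aside, the "10 reviews" and "2 commits" figures quoted above are just
the six-month governance thresholds pro-rated to the 8-week bootstrap window;
a quick illustrative sketch of the arithmetic:

# Pro-rating the governance team-size metrics (30 reviews or 6 commits
# per 6 months) down to the 8-week kolla-k8s-core bootstrap window.
WEEKS_PER_SIX_MONTHS = 26

def prorate(six_month_threshold, window_weeks):
    """Scale a six-month activity threshold to a shorter window."""
    return six_month_threshold * window_weeks / WEEKS_PER_SIX_MONTHS

print(prorate(30, 8))  # ~9.2 -> the "10 reviews" figure above
print(prorate(6, 8))   # ~1.8 -> the "2 commits" figure above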

Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-01 Thread Vikram Hosakote (vhosakot)
A separate repo will land us in the same spot as we had with kolla-mesos
originally.  We had all kinds of variance in the implementation.

I’m in favor of a single repo.

+1 for the single repo.

Regards,
Vikram Hosakote
IRC: vhosakot

From: Vikram Hosakote
Date: Sunday, May 1, 2016 at 11:36 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes 
repository management proposal up for vote

Please add me too to the list!

Regards,
Vikram Hosakote
IRC: vhosakot

From: Michał Jastrzębski
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Saturday, April 30, 2016 at 9:58 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes 
repository management proposal up for vote

Add me too please Steven.

On 30 April 2016 at 09:50, Steven Dake (stdake) wrote:
Fellow core reviewers,

We had a fantastic turnout at our "kubernetes as an underlay for Kolla"
fishbowl session.  The etherpad documents the folks interested and the
discussion at summit [1].

This proposal is mostly based upon a combination of several discussions at
open design meetings coupled with the kubernetes underlay discussion.

The proposal (and what we are voting on) is as follows:

Folks in the following list will be added to a kolla-k8s-core group.

  This kolla-k8s-core group will be responsible for code reviews and code
submissions to the kolla repository for the /kubernetes top level directory.
Individuals in kolla-k8s-core that consistently approve (+2) or disapprove
(-2) changes to TLD directories other than kubernetes will be handled
on a case by case basis with several "training warnings" followed by removal
from the kolla-k8s-core group.  The kolla-k8s-core group will be added as a
subgroup of the kolla-core reviewer team, which means they in effect have
all of the same ACL access as the existing kolla repository.  I think it is
better in this case to trust these individuals to do the right thing and only
approve changes for the kubernetes directory until proposed for the
kolla-core reviewer group where they can gate changes to any part of the
repository.

Britt Houser

mark casey

Steven Dake (delta-alpha-kilo-echo)

Michael Schmidt

Marian Schwarz

Andrew Battye

Kevin Fox (kfox)

Sidharth Surana (ssurana)

Michal Rostecki (mrostecki)

Swapnil Kulkarni (coolsvap)

MD NADEEM (mail2nadeem92)

Vikram Hosakote (vhosakot)

Jeff Peeler (jpeeler)

Martin Andre (mandre)

Ian Main (Slower)

Hui Kang (huikang)

Serguei Bezverkhi (sbezverk)

Alex Polvi (polvi)

Rob Mason

Alicja Kwasniewska

sean mooney (sean-k-mooney)

Keith Byrne (kbyrne)

Zdenek Janda (xdeu)

Brandon Jozsa (v1k0d3n)

Rajath Agasthya (rajathagasthya)


If you already are in the kolla-core review team, you won't be added to the
kolla-k8s-core team as you will already have the necessary ACLs to do the
job.  If you feel you would like to join this initial bootstrapping process,
please add your name to the etherpad in [1].

After 8 weeks (July 15th), folks that have not been actively reviewing or
committing code will be removed from the kolla-k8s-core group.  We will use
the governance repository metrics for team size [2] which is either 30
reviews over 6 months (in this case, 10 reviews), or 6 commits over 6 months
(in this case 2 commits) to the repository.  Folks that don't meet the
qualifications are still welcome to commit to the repository and contribute
code or documentation but will lose approval rights on patches.

The kubernetes codebase will be maintained in the
https://github.com/openstack/kolla repository under the kubernetes top level
directory.  Contributors that become active in the kolla repository itself
will be proposed over time to the kolla-core group.  Only core-kolla members
will be permitted to participate in policy decisions and voting thereof, so
there is some minimal extra responsibility involved in joining the
kolla-core ACL team for those folks wanting to move into the kolla core team
over time.  The goal will be to over time entirely remove the kolla-k8s-core
team and make one core reviewer team in the kolla-core ACL.

Members in the kolla-k8s-core group will have the ability to +2 or –2 any
change to the main kolla repository via ACLs, however, I propose we trust
these folks to only +2/-2 changes related to the kubernetes directory in the
kolla repository and remove folks that consistently break this agreement.
Initial errors as folks learn the system will be tolerated and commits
reverted as makes sense.

Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-01 Thread Vikram Hosakote (vhosakot)
Please add me too to the list!

Regards,
Vikram Hosakote
IRC: vhosakot

From: Michał Jastrzębski
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Saturday, April 30, 2016 at 9:58 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes 
repository management proposal up for vote

Add me too please Steven.

On 30 April 2016 at 09:50, Steven Dake (stdake) wrote:
Fellow core reviewers,

We had a fantastic turnout at our "kubernetes as an underlay for Kolla"
fishbowl session.  The etherpad documents the folks interested and the
discussion at summit [1].

This proposal is mostly based upon a combination of several discussions at
open design meetings coupled with the kubernetes underlay discussion.

The proposal (and what we are voting on) is as follows:

Folks in the following list will be added to a kolla-k8s-core group.

  This kolla-k8s-core group will be responsible for code reviews and code
submissions to the kolla repository for the /kubernetes top level directory.
Individuals in kolla-k8s-core that consistently approve (+2) or disapprove
(-2) changes to TLD directories other than kubernetes will be handled
on a case by case basis with several "training warnings" followed by removal
from the kolla-k8s-core group.  The kolla-k8s-core group will be added as a
subgroup of the kolla-core reviewer team, which means they in effect have
all of the same ACL access as the existing kolla repository.  I think it is
better in this case to trust these individuals to do the right thing and only
approve changes for the kubernetes directory until proposed for the
kolla-core reviewer group where they can gate changes to any part of the
repository.

Britt Houser

mark casey

Steven Dake (delta-alpha-kilo-echo)

Michael Schmidt

Marian Schwarz

Andrew Battye

Kevin Fox (kfox)

Sidharth Surana (ssurana)

Michal Rostecki (mrostecki)

Swapnil Kulkarni (coolsvap)

MD NADEEM (mail2nadeem92)

Vikram Hosakote (vhosakot)

Jeff Peeler (jpeeler)

Martin Andre (mandre)

Ian Main (Slower)

Hui Kang (huikang)

Serguei Bezverkhi (sbezverk)

Alex Polvi (polvi)

Rob Mason

Alicja Kwasniewska

sean mooney (sean-k-mooney)

Keith Byrne (kbyrne)

Zdenek Janda (xdeu)

Brandon Jozsa (v1k0d3n)

Rajath Agasthya (rajathagasthya)


If you already are in the kolla-core review team, you won't be added to the
kolla-k8s-core team as you will already have the necessary ACLs to do the
job.  If you feel you would like to join this initial bootstrapping process,
please add your name to the etherpad in [1].

After 8 weeks (July 15th), folks that have not been actively reviewing or
committing code will be removed from the kolla-k8s-core group.  We will use
the governance repository metrics for team size [2] which is either 30
reviews over 6 months (in this case, 10 reviews), or 6 commits over 6 months
(in this case 2 commits) to the repository.  Folks that don't meet the
qualifications are still welcome to commit to the repository and contribute
code or documentation but will lose approval rights on patches.

The kubernetes codebase will be maintained in the
https://github.com/openstack/kolla repository under the kubernetes top level
directory.  Contributors that become active in the kolla repository itself
will be proposed over time to the kolla-core group.  Only core-kolla members
will be permitted to participate in policy decisions and voting thereof, so
there is some minimal extra responsibility involved in joining the
kolla-core ACL team for those folks wanting to move into the kolla core team
over time.  The goal will be to over time entirely remove the kolla-k8s-core
team and make one core reviewer team in the kolla-core ACL.

Members in the kolla-k8s-core group will have the ability to +2 or –2 any
change to the main kolla repository via ACLs, however, I propose we trust
these folks to only +2/-2 changes related to the kubernetes directory in the
kolla repository and remove folks that consistently break this agreement.
Initial errors as folks learn the system will be tolerated and commits
reverted as makes sense.

I feel we made a couple of errors with the creation of kolla-mesos that need
correction.  Our first error was that the kolla-mesos-core team lacked a
diversely affiliated membership developing the code base.  The above
list has significant diversity.  The second error is that the repository was
split in the first place.  This resulted in a separate ABI to the containers
being implemented which was a sore spot for me personally.  We did our best
to build both sides of the bridge here, but this time I'd like the bridge
between these two interests and sets of individuals to be fully built before
beginning.

Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-01 Thread Britt Houser (bhouser)
Although it seems I'm in the minority, I am in favor of unified repo.

From: "Steven Dake (stdake)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Sunday, May 1, 2016 at 5:03 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla][kubernetes] One repo vs two

Ryan had rightly pointed out that when we made the original proposal at 9am
that morning, we had asked folks if they wanted to participate in a separate
repository.

I don't think a separate repository is the correct approach based upon one off 
private conversations with folks at summit.  Many people from that list 
approached me and indicated they would like to see the work integrated in one 
repository as outlined in my vote proposal email.  The reasons I heard were:

  *   Better integration of the community
  *   Better integration of the code base
  *   Doesn't present an us vs them mentality that one could argue happened 
during kolla-mesos
  *   A second repository makes k8s a second class citizen deployment 
architecture without a voice in the full deployment methodology
  *   Two gating methods versus one
  *   No going back to a unified repository while preserving git history

In favor of the separate repositories, I heard:

  *   It presents a unified workspace for kubernetes alone
  *   Packaging without ansible is simpler as the ansible directory need not be 
deleted

There were other complaints but not many pros.  Unfortunately I failed to 
communicate these complaints to the core team prior to the vote, so now is the 
time for fixing that.

I'll leave it open to the new folks that want to do the work if they want to 
work on an offshoot repository and open us up to the possible problems above.

If you are on this list:


  *   Ryan Hallisey
  *   Britt Houser

  *   mark casey

  *   Steven Dake (delta-alpha-kilo-echo)

  *   Michael Schmidt

  *   Marian Schwarz

  *   Andrew Battye

  *   Kevin Fox (kfox)

  *   Sidharth Surana (ssurana)

  *   Michal Rostecki (mrostecki)

  *   Swapnil Kulkarni (coolsvap)

  *   MD NADEEM (mail2nadeem92)

  *   Vikram Hosakote (vhosakot)

  *   Jeff Peeler (jpeeler)

  *   Martin Andre (mandre)

  *   Ian Main (Slower)

  *   Hui Kang (huikang)

  *   Serguei Bezverkhi (sbezverk)

  *   Alex Polvi (polvi)

  *   Rob Mason

  *   Alicja Kwasniewska

  *   sean mooney (sean-k-mooney)

  *   Keith Byrne (kbyrne)

  *   Zdenek Janda (xdeu)

  *   Brandon Jozsa (v1k0d3n)

  *   Rajath Agasthya (rajathagasthya)
  *   Jinay Vora
  *   Hui Kang
  *   Davanum Srinivas


Please speak up if you are in favor of a separate repository or a unified 
repository.

The core reviewers will still take responsibility for determining if we proceed 
on the action of implementing kubernetes in general.

Thank you
-steve


[openstack-dev] [Nova] Libvirt version requirement

2016-05-01 Thread ZhiQiang Fan
Hi Nova cores,

There is a spec [1] submitted to the Telemetry project for the Newton
release which mentions that a new feature requires libvirt >= 1.3.4. I'm
not sure whether this will have a bad impact on the Nova service, so I'm
opening this thread to wait for your opinions.

[1]: https://review.openstack.org/#/c/311655/

Thanks!
ZhiQiang Fan
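
For context, a minimum-libvirt-version requirement like this usually surfaces
in the libvirt driver as a packed-integer comparison against a version
constant. A minimal sketch, not Nova's actual code (the constant name and the
gated feature are hypothetical):

# Libvirt reports its version packed as major * 1000000 + minor * 1000
# + micro, so a "needs libvirt >= 1.3.4" gate reduces to an integer compare.
MIN_LIBVIRT_FOR_NEW_FEATURE = (1, 3, 4)  # hypothetical gate for the spec above

def version_to_int(version):
    major, minor, micro = version
    return major * 1000000 + minor * 1000 + micro

def has_min_version(reported, required):
    """True if the host's libvirt is new enough for the gated feature."""
    return version_to_int(reported) >= version_to_int(required)

print(has_min_version((1, 2, 16), MIN_LIBVIRT_FOR_NEW_FEATURE))  # False
print(has_min_version((1, 3, 5), MIN_LIBVIRT_FOR_NEW_FEATURE))   # True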


Re: [openstack-dev] [nova] Austin summit scheduler session recap

2016-05-01 Thread Matt Riedemann

On 5/1/2016 6:46 PM, Matt Riedemann wrote:

On Wednesday morning Jay Pipes led a double session on the work going on
in the Nova scheduler. The session etherpad is here [1].

Jay started off by taking us through a high-level overview of what was
completed for the quantitative changes:

1. Resource classes

2. Resource providers

3. Online data migration of compute node RAM/CPU/disk inventory.

Then Jay talked through the in-progress quantitative changes:

1. Online data migration of allocation fields for instances.

2. Modeling of generic resource pools for things like shared storage and
IP subnet allocation pools (for Neutron routed networks).

3. Creating a separate placement REST API endpoint for generic resource
pools.

4. Cleanup/refactor around PCI device handling.

For the quantitative changes, we still need to migrate the PCI and NUMA
fields.
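
To make the quantitative model concrete: the resource-provider work boils
down to tracking inventory (with reserved amounts and allocation ratios) and
allocations consumed against it. A rough sketch under my own naming, not the
actual Nova code:

# Illustrative model of a resource provider's inventory. A request fits
# when used + requested <= (total - reserved) * allocation_ratio.
from dataclasses import dataclass

@dataclass
class Inventory:
    total: int
    reserved: int = 0
    allocation_ratio: float = 1.0

def fits(inv, used, requested):
    capacity = (inv.total - inv.reserved) * inv.allocation_ratio
    return used + requested <= capacity

# e.g. a compute node with 64 GB RAM, 4 GB reserved, 1.5x overcommit
ram = Inventory(total=64, reserved=4, allocation_ratio=1.5)
print(fits(ram, used=80, requested=8))  # True  (capacity is 90)
print(fits(ram, used=88, requested=8))  # False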

In the second scheduler session we got more into the capabilities
(qualitative) part of the request. For example, this could be exposing
hypervisor capabilities from the compute node through the API or for
scheduling.

Jay pointed out there have been several blueprints posted over time
related to this.

We discussed just what a capability is, i.e. is a hypervisor version a
capability since certain features are only exposed with certain
versions?  For libvirt it isn't really. For example, you can only set
the admin password in a guest if you have libvirt >=1.2.16. But for
hyper-v there are features which are supported by both older and newer
versions of the hypervisor but are generally considered better or more
robust in the newer version. But for some things, like supporting
hyperv-gen2 instances, we consider that 'supports-hyperv-gen2' is the
capability. We can further break that down by value enums if we need
more granular information about the capability.

While informative, we left the session with some unanswered next steps:

1. Where to put the inventories/allocations tables. We have three
choices: API DB, Nova (cell) DB, or a new placement DB. Leaving them in
the cell DB would mean aggregating data which would be a mess. Putting
them in a new placement DB would mean a 2nd new database in a short
number of releases for deployers to manage. So I believe we agreed to
put them in the API DB (but if others think otherwise please speak up).

2. We talked for quite a while about what a capability is and isn't, but
I didn't come away with a definitive answer. This might get teased out
in Claudiu's spec [2]. Note, however, that on Friday we agreed that as
far as microversions are concerned, a new capability exposed in the REST
API requires a microversion, but new enumerations for an existing
capability, e.g. CPU features, do not require a microversion bump; there
are just too many of them. (A small illustration follows this list.)

3. I think we're in agreement on the blueprints/roadmap that Jay has put
forth, but it's unclear if we have an owner for each blueprint. Jay and
Chris Dent own several of these and some others are helping out (Dan
Smith has been doing a lot of the online data migration patches), but we
don't have owners for everything.

4. We need to close out any obsolete blueprints from the list that Jay
had in the etherpad. As already noted, several of these are older and
probably superseded by current work, so the team just needs to flush
these out.

5. Getting approval on the generic-resource-pools,
resource-providers-allocations, standardizing capabilities (extra
specs). The first two are pretty clear at this point, the specs just
need to be rebased and reviewed in the next week. A lot of the code is
already up for review. We're less clear on standardizing capabilities.
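
To make the microversion rule from item 2 concrete, here's a tiny
illustration (the payload shape and field names are invented for the
example):

# Exposing a brand-new capability key changes the API surface and needs
# a microversion; appending one more value to an existing enumeration
# (e.g. a new CPU feature string) does not.
capabilities = {
    "supports-hyperv-gen2": True,               # new key => microversion bump
    "cpu-features": ["sse4.2", "avx", "avx2"],  # new enum value => no bump
}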

--

So the focus for the immediate future is going to have to be on
completing the resource providers, inventory and allocation data
migration code and generic resource pools. That's all the quantitative
work and if we can push to get a lot of that done before the midcycle
meetup it would be great, then we can see where we sit and discuss more
about capabilities.

Jay - please correct or add to anything above.

[1] https://etherpad.openstack.org/p/newton-nova-scheduler
[2] https://review.openstack.org/#/c/286520/



I forgot to mention that toward the end of the second scheduler session, 
Yingxin Cheng from Intel gave a short presentation [1] on some 
performance improvements he's seen with the 'eventually consistent' host 
shared state scheduler prototype.


He had a particularly interesting slide (7) with a performance 
comparison of the default filter scheduler configuration vs the caching 
scheduler vs the eventually consistent prototype scheduler. The latter 
two out-performed the default configuration in his testing.


A TODO from the presentation was for Yingxin to pre-load some of the 
computes used in the test and see how the prototype works with handling 
those pre-loaded computes.


[1] https://docs.google.com/presentation/d/1UG1HkEWyxPVMXseLwJ44ZDm-ek_MPc4M65H8EiwZnWs/edit?ts=571fcdd5#slide=id.g12d2cf15cd_2_90



[openstack-dev] [nova] Austin summit performance VMs CI and technical debt session recap

2016-05-01 Thread Matt Riedemann
On Wednesday morning we discussed the state of performance VMs CI and 
technical debt. Performance VMs are more commonly known as those taking 
advantage of network function virtualization (NFV) features, like 
SR-IOV, PCI, NUMA, CPU pinning and huge pages. The full etherpad is here 
[1].


The session started out with a recap of the existing CI testing we have 
in Nova today for NFV:


1. Intel PCI CI - pretty basic custom test(s) of booting an instance 
with a PCI device flavor and then SSHing into the guest to ensure the 
device shows up.


2. Mellanox SR-IOV CI for macvtap - networking scenario tests in Tempest 
using an SR-IOV port of type 'macvtap'.


3. Mellanox SR-IOV CI for direct - networking scenario tests in Tempest 
using an SR-IOV port of type 'direct'.


4. Intel NFV CI - custom API tests in a Tempest plugin using flavors 
that have NUMA, CPU pinning and Huge Pages extra specs.


We then talked about gaps in testing of NFV features, the major ones being:

1. Intel NFV CI is single-node so we don't expose bugs with scheduling 
to multiple computes (we had a major bug in Nova where we'd only ever 
schedule to a single compute when using NUMA). We could potentially test 
some of this with an in-tree functional test.


2. We don't have any testing for SR-IOV ports of type 'direct-physical' 
which was recently added but is buggy.


3. We don't have any testing for resize/migrate with a different PCI 
device flavor, and according to Moshe Levi from Mellanox it's never 
worked, or he doesn't see how it could have. Testing this properly would 
require a multinode devstack job, which we don't have for any of the NFV 
third party CI today. Moshe has a patch up to fix the bug [2] but 
long-term we really need CI testing for this so we don't regress it.


4. ovs-dpdk has limited testing in Nova today. The Intel Networking CI 
job runs it on any changes to nova/virt/libvirt/vif.py and on Neutron 
changes. I've asked that the module whitelist be expanded for Nova 
changes to run these tests. It also sounds like it's going to be run on 
os-vif changes, so once we integrate os-vif for ovs-dpdk we'll have some 
coverage there.


5. In general we have issues with the NFV CI systems:

a) There are different teams running the different Intel CI jobs, so 
communication and status reporting can be difficult. Sean Mooney said 
that his team might be consolidating and owning some of the various 
jobs, so that should help.


b) The Mellanox CI jobs are running on dedicated hardware and doing 
cleanups of the host between runs, but this can miss things. The Intel 
CI guys said that they use privileged containers to get around this type 
of issue. It would be great if the various teams running these CIs could 
share what they are doing and best practices, tooling, etc.


c) We might be able to run some of the Intel NFV CI testing in the 
community infra since some of the public cloud providers being used 
allow nested virt. However, Clark Boylan reported that they have noticed 
very strange and abrupt crashes when running in these modes, so right 
now the stability is in question. Sean Mooney from Intel said that they 
could look into upstreaming some of their CI to community infra. We 
could also get an experimental job setup to see how stable it is and 
tease out the issues.


--

Beyond CI testing we also talked about the gap in upstream 
documentation. The good news is there is more documentation upstream 
than I was aware of. The neutron networking guide has information on 
configuring nova/neutron for using SR-IOV. The admin guide has some good 
information on CPU pinning and large pages, and some documentation for 
some of the more widely used flavor extra specs, but is by no means 
exhaustive - or clear on when a flavor extra spec or image metadata is used.


Stephen Finucane and Ludovic Beliveau volunteered to help work on the 
documentation.


--

One of the takeaways from this session was the clear lack of NFV users 
and people from the OPNFV community in the room. At one point someone 
asked for anyone from those groups to raise their hand and maybe one 
person did. There are surely developers involved, like Moshe, Sean, 
Stephen and Ludovic, but we still have a gap between the companies 
pushing for these features and the developers doing the work. That's one 
of the reasons why the core team consistently makes NFV support a lower 
priority. Part of the issue might simply be that those stakeholders are 
in different track sessions at the same time as the design summit. But I 
and some others from the core team were in an NFV luncheon on Monday to 
talk about what the NFV community can do to be more involved and we went 
over some of the above and pointed out this very session to attend, and 
it didn't seem to change that since the NFV stakeholders in that 
luncheon didn't attend the design session.


--

On Friday during the meetup session we briefly discussed FPGAs and 
similar acceleration-type 

Re: [openstack-dev] [neutron] OSC transition

2016-05-01 Thread Na Zhu
Hi Richard,

So what is the conclusion on where to put the *aas CLIs?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   Richard Theis
To: "OpenStack Development Mailing List (not for usage questions)"
Date:   2016/04/26 22:32
Subject:Re: [openstack-dev] [neutron] OSC transition



Hi, 

The latest devref [1] would place it in python-neutronclient as Henry 
noted. But stay tuned for results from the summit session. 

[1] https://github.com/openstack/python-neutronclient/blob/master/doc/source/devref/transition_to_osc.rst


- Richard


"Na Zhu"  wrote on 04/26/2016 08:29:21 AM:

> From: "Na Zhu"  
> To: hen...@gessau.net 
> Cc: "OpenStack Development Mailing List \(not for usage questions\)"
>  
> Date: 04/26/2016 08:34 AM 
> Subject: Re: [openstack-dev] [neutron] OSC transition 
> 
> Hi Henry,
> 
> Thanks for the information. Why do you think the neutron-dynamic-routing
> CLI should live in python-neutronclient?
> From this link
> http://docs.openstack.org/developer/python-neutronclient/devref/transition_to_osc.html
> section "Where does my CLI belong?", the *aas CLIs belong to their own
> projects, not to python-neutronclient. BGP is also a service like *aas, so
> I think the BGP CLIs should live in neutron-dynamic-routing, or in a
> separate repo named python-*client. Pls correct me if I am wrong, thanks.
> 
> 
> 
> Regards,
> Juno Zhu
> IBM China Development Labs (CDL) Cloud IaaS Lab
> Email: na...@cn.ibm.com
> 5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong 
> New District, Shanghai, China (201203)
> 
> 
> 
From:   Henry Gessau
To: "OpenStack Development Mailing List (not for usage questions)"
Date:   2016/04/26 21:09
> Subject:Re: [openstack-dev] [neutron] OSC transition
> 
> 
> 
> Adding the [neutron] tag.
>
> I believe that the OSC extension for neutron-dynamic-routing should live in
> the python-neutronclient repo. Keep in touch with Richard Theis as he is the
> one leading the transition to OSC. He is rtheis on IRC.
>
> See:
> http://lists.openstack.org/pipermail/openstack-dev/2016-April/093139.html
> https://review.openstack.org/309587
> 
> 
> Na Zhu wrote:
> > Dear All,
> > 
> > 
> > I have a question about OSC transition. Recently the community approved
> > moving BGP out of neutron, as a service like the other *aas projects. The
> > BGP CLIs need to be removed from neutronclient. Because of the OSC
> > transition, I cannot just move the BGP CLI code from the
> > python-neutronclient repo to the neutron-dynamic-routing repo;
> > I have to refactor the code and transition to the OSC plugin system.
> > 
> > From the link
> > http://docs.openstack.org/developer/python-openstackclient/plugins.html,
> > the client has a separate repo. Take designate as an example: the CLI
> > repo is python-designateclient and the project repo is designate. So for
> > BGP, should I create a repo for the CLI, or leverage the project repo
> > neutron-dynamic-routing?
> > 
> > 
> > 
> > 
> > Regards,
> > Juno Zhu
> > IBM China Development Labs (CDL) Cloud IaaS Lab
> > Email: na...@cn.ibm.com
> > 5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New
> > District, Shanghai, China (201203)
> > 
> > 
> > 
> > 
> 
> 
> 
> 
> 
> 



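For reference on the transition work discussed above: an OSC plugin rewrites
each CLI as a cliff command class registered through setuptools entry points.
A minimal sketch (the command and resource are invented for illustration, not
the actual neutron-dynamic-routing code):

# Sketch of an OSC plugin command built on cliff, the framework
# python-openstackclient plugins use.
from cliff import show

class ShowBgpSpeaker(show.ShowOne):
    """Show details of a BGP speaker (hypothetical example command)."""

    def get_parser(self, prog_name):
        parser = super(ShowBgpSpeaker, self).get_parser(prog_name)
        parser.add_argument("speaker", help="Name or ID of the BGP speaker")
        return parser

    def take_action(self, parsed_args):
        # Stubbed data keeps the sketch self-contained; a real command
        # would look the resource up via the network client here.
        data = {"id": parsed_args.speaker, "status": "ACTIVE"}
        return zip(*sorted(data.items()))

The class then gets registered in the plugin package's setup.cfg under an
openstack CLI entry-point namespace so that `openstack` can discover it.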


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-01 Thread Michał Jastrzębski
I think merging two repos is possible while keeping the history. Tonyb also
said that there will not be any issues with releasing z-streams. I
recall we kind of made a decision at summit that we keep ansible in the
kolla tree as long as it's the only stable deployment orchestration tool,
and when a second one appears (kube?), then we'll make a call about
merging it into the main repo. I am also in favor of having one repo and
one core team, but for me it's important that everything we have in the
main tree is stable and prod ready, which won't be the case for kube
for some time.

Cheers,
Michal

On 1 May 2016 at 19:56, Davanum Srinivas wrote:
> Steve,
>
> Thanks for bringing up this decision-making to the open forum.
>
> This is a tough decision. The "us vs them" dynamic I am hoping will not
> happen this time around, as we'll watch out for that. Since there has been
> talk about ansible getting split out eventually, splitting k8s into a
> separate repo would be the right decision. If we do need git surgery
> down the line, we can call on experts in our community :)
>
> So +1 to a separate repo.
>
> Thanks,
> Dims
>
>
> On Sun, May 1, 2016 at 4:03 PM, Steven Dake (stdake) wrote:
>> Ryan had rightly pointed out that when we made the original proposal at 9am
>> that morning, we had asked folks if they wanted to participate in a separate
>> repository.
>>
>> I don't think a separate repository is the correct approach based upon one
>> off private conversations with folks at summit.  Many people from that list
>> approached me and indicated they would like to see the work integrated in
>> one repository as outlined in my vote proposal email.  The reasons I heard
>> were:
>>
>> Better integration of the community
>> Better integration of the code base
>> Doesn't present an us vs them mentality that one could argue happened during
>> kolla-mesos
>> A second repository makes k8s a second class citizen deployment architecture
>> without a voice in the full deployment methodology
>> Two gating methods versus one
>> No going back to a unified repository while preserving git history
>>
>> In favor of the separate repositories, I heard:
>>
>> It presents a unified workspace for kubernetes alone
>> Packaging without ansible is simpler as the ansible directory need not be
>> deleted
>>
>> There were other complaints but not many pros.  Unfortunately I failed to
>> communicate these complaints to the core team prior to the vote, so now is
>> the time for fixing that.
>>
>> I'll leave it open to the new folks that want to do the work if they want to
>> work on an offshoot repository and open us up to the possible problems
>> above.
>>
>> If you are on this list:
>>
>> Ryan Hallisey
>> Britt Houser
>>
>> mark casey
>>
>> Steven Dake (delta-alpha-kilo-echo)
>>
>> Michael Schmidt
>>
>> Marian Schwarz
>>
>> Andrew Battye
>>
>> Kevin Fox (kfox)
>>
>> Sidharth Surana (ssurana)
>>
>> Michal Rostecki (mrostecki)
>>
>> Swapnil Kulkarni (coolsvap)
>>
>> MD NADEEM (mail2nadeem92)
>>
>> Vikram Hosakote (vhosakot)
>>
>> Jeff Peeler (jpeeler)
>>
>> Martin Andre (mandre)
>>
>> Ian Main (Slower)
>>
>> Hui Kang (huikang)
>>
>> Serguei Bezverkhi (sbezverk)
>>
>> Alex Polvi (polvi)
>>
>> Rob Mason
>>
>> Alicja Kwasniewska
>>
>> sean mooney (sean-k-mooney)
>>
>> Keith Byrne (kbyrne)
>>
>> Zdenek Janda (xdeu)
>>
>> Brandon Jozsa (v1k0d3n)
>>
>> Rajath Agasthya (rajathagasthya)
>> Jinay Vora
>> Hui Kang
>> Davanum Srinivas
>>
>>
>>
>> Please speak up if you are in favor of a separate repository or a unified
>> repository.
>>
>> The core reviewers will still take responsibility for determining if we
>> proceed on the action of implementing kubernetes in general.
>>
>> Thank you
>> -steve
>>
>>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>



Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-01 Thread Swapnil Kulkarni
On Sat, Apr 30, 2016 at 8:20 PM, Steven Dake (stdake) wrote:
> Fellow core reviewers,
>
> We had a fantastic turnout at our "kubernetes as an underlay for Kolla"
> fishbowl session.  The etherpad documents the folks interested and the
> discussion at summit [1].
>
> This proposal is mostly based upon a combination of several discussions at
> open design meetings coupled with the kubernetes underlay discussion.
>
> The proposal (and what we are voting on) is as follows:
>
> Folks in the following list will be added to a kolla-k8s-core group.
>
>  This kolla-k8s-core group will be responsible for code reviews and code
> submissions to the kolla repository for the /kubernetes top level directory.
> Individuals in kolla-k8s-core that consistently approve (+2) or disapprove
> (-2) changes to TLD directories other than kubernetes will be handled
> on a case by case basis with several "training warnings" followed by removal
> from the kolla-k8s-core group.  The kolla-k8s-core group will be added as a
> subgroup of the kolla-core reviewer team, which means they in effect have
> all of the same ACL access as the existing kolla repository.  I think it is
> better in this case to trust these individuals to do the right thing and only
> approve changes for the kubernetes directory until proposed for the
> kolla-core reviewer group where they can gate changes to any part of the
> repository.
>
> Britt Houser
>
> mark casey
>
> Steven Dake (delta-alpha-kilo-echo)
>
> Michael Schmidt
>
> Marian Schwarz
>
> Andrew Battye
>
> Kevin Fox (kfox)
>
> Sidharth Surana (ssurana)
>
> Michal Rostecki (mrostecki)
>
> Swapnil Kulkarni (coolsvap)
>
> MD NADEEM (mail2nadeem92)
>
> Vikram Hosakote (vhosakot)
>
> Jeff Peeler (jpeeler)
>
> Martin Andre (mandre)
>
> Ian Main (Slower)
>
> Hui Kang (huikang)
>
> Serguei Bezverkhi (sbezverk)
>
> Alex Polvi (polvi)
>
> Rob Mason
>
> Alicja Kwasniewska
>
> sean mooney (sean-k-mooney)
>
> Keith Byrne (kbyrne)
>
> Zdenek Janda (xdeu)
>
> Brandon Jozsa (v1k0d3n)
>
> Rajath Agasthya (rajathagasthya)
>
>
> If you already are in the kolla-core review team, you won't be added to the
> kolla-k8s-core team as you will already have the necessary ACLs to do the
> job.  If you feel you would like to join this initial bootstrapping process,
> please add your name to the etherpad in [1].
>
> After 8 weeks (July 15th), folks that have not been actively reviewing or
> committing code will be removed from the kolla-k8s-core group.  We will use
> the governance repository metrics for team size [2] which is either 30
> reviews over 6 months (in this case, 10 reviews), or 6 commits over 6 months
> (in this case 2 commits) to the repository.  Folks that don't meet the
> qualifications are still welcome to commit to the repository and contribute
> code or documentation but will lose approval rights on patches.
>
> The kubernetes codebase will be maintained in the
> https://github.com/openstack/kolla repository under the kubernetes top level
> directory.  Contributors that become active in the kolla repository itself
> will be proposed over time to the kolla-core group.  Only core-kolla members
> will be permitted to participate in policy decisions and voting thereof, so
> there is some minimal extra responsibility involved in joining the
> kolla-core ACL team for those folks wanting to move into the kolla core team
> over time.  The goal will be to over time entirely remove the kolla-k8s-core
> team and make one core reviewer team in the kolla-core ACL.
>
> Members in the kolla-k8s-core group will have the ability to +2 or –2 any
> change to the main kolla repository via ACLs, however, I propose we trust
> these folks to only +2/-2 changes related to the kubernetes directory in the
> kolla repository and remove folks that consistently break this agreement.
> Initial errors as folks learn the system will be tolerated and commits
> reverted as makes sense.
>
> I feel we made a couple of errors with the creation of kolla-mesos that need
> correction.  Our first error was that the kolla-mesos-core team lacked a
> diversely affiliated membership developing the code base.  The above
> list has significant diversity.  The second error is that the repository was
> split in the first place.  This resulted in a separate ABI to the containers
> being implemented which was a sore spot for me personally.  We did our best
> to build both sides of the bridge here, but this time I'd like the bridge
> between these two interests and set of individuals to be fully built before
> beginning.  As such, I'd ask the existing kolla-core team to trust my
> judgement on this point and roll with it.  We can always change the
> structure later if this model doesn't work out as I expect it will, but if
> we started with split repos and a changed structure to begin with, we can't
> go back to a non-split repo as the action is irreversible according to dims.
>
> I know this proposal may seem uncomfortable for our 

Re: [openstack-dev] [nova][newton] Austin summit nova/newton cross-project session recap

2016-05-01 Thread Jay Pipes
Matt, just a quick top-post to say thank you very much for this status 
report as well as the scheduler session status report. Really appreciate 
the help.


Best,
-jay

On 05/01/2016 09:01 PM, Matt Riedemann wrote:

On Wednesday morning the Nova and Neutron teams got together for a
design summit session. The full etherpad is here [1].

We talked through three major items.

1. Neutron routed networks.

Carl Baldwin gave a quick recap that we're on track with the Nova spec
[2] and had pushed a new revision which addressed Dan Smith's latest
comments. The spec is highly dependent on Jay Pipes'
generic-resource-pools spec which needs to be rebased, and then
hopefully we can approve that this week and the routed networks one
shortly thereafter.

We spent some time with Dan Smith sketching out his idea for moving the
neutron network allocation code from the nova compute node to conductor.
This would help with a few things:

a) Doing the allocation earlier in the process so it's less expensive if
we fail on the compute and get into a retry loop.

b) It should clean up a bunch of the allocation code that's in the
network API today, so we can separate the allocation logic from the
check/update logic. This would mean that by the time we get to the
compute the ports are already allocated and we just have to check back
with Neutron that they are still correct and update their details. And
that would also mean by the time we get to the compute it looks the same
whether the user provided the port at boot time or Nova allocated it.

c) Nova can update its allocation tables before scheduling to make a
more informed decision about where to place the instance based on what
Neutron has already told us is available.

John Garbutt is planning on working on doing this cleanup/refactor to
move parts of the network allocation code from the compute to conductor.
We'll most likely need a spec for this work.

2. Get Me a Network

We really just talked about two items here:

a) With the microversion, if the user requests 'auto' network allocation
and there are no available networks for the project and dry-run
validation for auto-allocated-topology fails on the Neutron side (the
default public network and subnet pool aren't setup), we'll fail the API
request with a 409. I had asked if we should fall back to the existing
behavior of just not allocating networking, but we decided that it will
be better to be explicit about a failure if you're requesting 'auto'. In
most cases projects already have a network available to them when their
cloud provider sets up their project, so they won't even get to the
auto-allocated network topology code being written for the spec. But if
not, it's a failure and not allocating networking is just...weird. Plus
you can opt into the 'none' behavior with the microversion if that's
what you really want.

b) There were some questions about making get-me-a-network more advanced
than the networking that is setup today (a tenant network behind a
router). The agreement was that get-me-a-network is for the case that
the user doesn't care, they just want networking for their instance in
Nova. Anything that's more advanced should be pre-allocated in Neutron
and the instance in Nova should be booted with the network/port that was
pre-allocated in Neutron. There might be future changes/customization to
the type of network created from the auto-allocated-topology API in
Neutron, but that should be dealt with only in Neutron and not a concern
of Nova.

3. Deprecating nova-network.

The rest of the session was spent discussing the (re)deprecation of
nova-network. Given the recent couple of user surveys, it's clear that
deployments have shifted to using Neutron.

We have some gaps in the Nova REST API but we can work each of those on
a case-by-case basis. For example, we won't implement the
os-virtual-interfaces API for Neutron. Today it returns a 400, that
could maybe use a more appropriate error code, but it won't be changed
to be a 200. And for the os-limits API which returns some compute and
network resource quota limits info, we can microversion it to simply not
return the network resources if you're using Neutron. Once we drop
nova-network we'll update the API again to not return those network
resources at all, you'll get them from Neutron (if you aren't already).

We also decided it's not worth deprecating nova-network in pieces since
it gets messy, and something like cells v2 might force us to add feature
parity if it's not deprecated outright.

And we said it's not worth splitting it out into its own repo since
that has a cost of its own to maintain. If people want to fork the repo
to keep using it, that's on them but it won't be supported by the Nova
team once it's removed.

So given the above, Sean proposed the deprecation patch [3] which by now
is already (eagerly) approved. Note that there isn't a timetable on the
actual removal, it could be as early as Ocata, but we need to address
the REST API gaps and 

[openstack-dev] [nova][newton] Austin summit nova/newton cross-project session recap

2016-05-01 Thread Matt Riedemann
On Wednesday morning the Nova and Neutron teams got together for a 
design summit session. The full etherpad is here [1].


We talked through three major items.

1. Neutron routed networks.

Carl Baldwin gave a quick recap that we're on track with the Nova spec 
[2] and had pushed a new revision which addressed Dan Smith's latest 
comments. The spec is highly dependent on Jay Pipes' 
generic-resource-pools spec which needs to be rebased, and then 
hopefully we can approve that this week and the routed networks one 
shortly thereafter.


We spent some time with Dan Smith sketching out his idea for moving the 
neutron network allocation code from the nova compute node to conductor. 
This would help with a few things:


a) Doing the allocation earlier in the process so it's less expensive if 
we fail on the compute and get into a retry loop.


b) It should clean up a bunch of the allocation code that's in the 
network API today, so we can separate the allocation logic from the 
check/update logic. This would mean that by the time we get to the 
compute the ports are already allocated and we just have to check back 
with Neutron that they are still correct and update their details. And 
that would also mean by the time we get to the compute it looks the same 
whether the user provided the port at boot time or Nova allocated it.


c) Nova can update its allocation tables before scheduling to make a 
more informed decision about where to place the instance based on what 
Neutron has already told us is available.


John Garbutt is planning on working on doing this cleanup/refactor to 
move parts of the network allocation code from the compute to conductor. 
We'll most likely need a spec for this work.
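
To make the shape of that refactor concrete, here is a minimal
pseudocode-style sketch of the conductor/compute split described in
points a)-c) above. This is not actual Nova code and not the agreed
design; every class, method, and attribute name below is an
illustrative assumption.

# Illustrative sketch only -- not actual Nova code. All names here
# (allocate_ports_for_instance, validate_and_update_ports, etc.) are
# hypothetical stand-ins for the split described above.

class ConductorManager(object):
    def build_instances(self, context, instance, request_spec):
        # a) Allocate Neutron ports *before* picking a host, so a
        # failure here is cheap and never triggers a compute-side
        # retry loop.
        ports = self.network_api.allocate_ports_for_instance(
            context, instance, request_spec.requested_networks)
        # c) The fresh allocation data can inform host selection.
        host = self.scheduler_client.select_destination(
            context, request_spec, ports)
        self.compute_rpcapi.build_and_run_instance(
            context, instance, host, ports=ports)

class ComputeManager(object):
    def build_and_run_instance(self, context, instance, ports):
        # b) Ports already exist by the time we get here; just confirm
        # they are still correct and update host-specific details. The
        # path is now identical whether the user passed a port at boot
        # or Nova allocated one in conductor.
        self.network_api.validate_and_update_ports(
            context, instance, ports, host=self.host)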


2. Get Me a Network

We really just talked about two items here:

a) With the microversion, if the user requests 'auto' network allocation 
and there are no available networks for the project and dry-run 
validation for auto-allocated-topology fails on the Neutron side (the 
default public network and subnet pool aren't set up), we'll fail the API 
request with a 409. I had asked if we should fall back to the existing 
behavior of just not allocating networking, but we decided that it will 
be better to be explicit about a failure if you're requesting 'auto'. In 
most cases projects already have a network available to them when their 
cloud provider sets up their project, so they won't even get to the 
auto-allocated network topology code being written for the spec. But if 
not, it's a failure and not allocating networking is just...weird. Plus 
you can opt into the 'none' behavior with the microversion if that's 
what you really want. (A request sketch follows point b below.)


b) There were some questions about making get-me-a-network more advanced 
than the networking that is setup today (a tenant network behind a 
router). The agreement was that get-me-a-network is for the case that 
the user doesn't care, they just want networking for their instance in 
Nova. Anything that's more advanced should be pre-allocated in Neutron 
and the instance in Nova should be booted with the network/port that was 
pre-allocated in Neutron. There might be future changes/customization to 
the type of network created from the auto-allocated-topology API in 
Neutron, but that should be dealt with only in Neutron and not a concern 
of Nova.
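
To make the 'auto'/'none' semantics from point a) concrete, here is a
minimal sketch of what such a boot request might look like, assuming
the microversion lands roughly as discussed above. The microversion
number, endpoint, token, and UUIDs are placeholders, not the final API.

# Sketch only: "2.XX", the endpoint, token, and UUIDs below are
# placeholders; the exact request format is an assumption.
import requests

NOVA = "http://controller:8774/v2.1"
HEADERS = {
    "X-Auth-Token": "<token>",
    # Opt in to the new microversion; without it, the old "silently
    # skip networking" behavior would still apply.
    "X-OpenStack-Nova-API-Version": "2.XX",
}

body = {"server": {
    "name": "demo",
    "imageRef": "<image-uuid>",
    "flavorRef": "<flavor-id>",
    # "auto": allocate networking for me, or fail with a 409 if the
    # project has no network and auto-allocation isn't possible.
    # "none": explicitly boot with no networking at all.
    "networks": "auto",
}}

resp = requests.post(NOVA + "/servers", json=body, headers=HEADERS)
if resp.status_code == 409:
    # No usable network, and the dry-run validation of
    # auto-allocated-topology failed on the Neutron side.
    print("auto allocation not possible:", resp.text)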


3. Deprecating nova-network.

The rest of the session was spent discussing the (re)deprecation of 
nova-network. Given the recent couple of user surveys, it's clear that 
deployments have shifted to using Neutron.


We have some gaps in the Nova REST API but we can work each of those on 
a case-by-case basis. For example, we won't implement the 
os-virtual-interfaces API for Neutron. Today it returns a 400, that 
could maybe use a more appropriate error code, but it won't be changed 
to be a 200. And for the os-limits API which returns some compute and 
network resource quota limits info, we can microversion it to simply not 
return the network resources if you're using Neutron. Once we drop 
nova-network we'll update the API again to not return those network 
resources at all, you'll get them from Neutron (if you aren't already).


We also decided it's not worth deprecating nova-network in pieces since 
it gets messy, and something like cells v2 might force us to add feature 
parity if it's not deprecated outright.


And we said it's not worth splitting it out into its own repo since 
that has a cost of its own to maintain. If people want to fork the repo 
to keep using it, that's on them but it won't be supported by the Nova 
team once it's removed.


So given the above, Sean proposed the deprecation patch [3] which by now 
is already (eagerly) approved. Note that there isn't a timetable on the 
actual removal, it could be as early as Ocata, but we need to address 
the REST API gaps and virt driver CI testing that's using it today. So 
we'll see where we're at during the midcycle and once we get to Ocata to 
see if it's possible to 

Re: [openstack-dev] [kolla][vote] Place kolla-mesos in openstack-attic

2016-05-01 Thread Swapnil Kulkarni
On Sat, Apr 23, 2016 at 5:38 AM, Steven Dake (stdake)  wrote:
> Fellow Core Reviewers,
>
> Since many of the engineers working on the kolla-mesos repository are moving
> on to other things[1], possibly including implementing Kubernetes as an
> underlay for OpenStack containers, I propose we move the kolla-mesos
> repository into the attic[2].  This will allow folks to focus on Ryan's
> effort[3] to use Kubernetes as an underlay for Kolla containers for folks
> that want a software based underlay rather than bare metal.  I understand
> Mirantis's position that Kubernetes may have some perceived "more mojo".
> If we are to work on an underlay, it might as well be a fresh effort based
> upon the experience of the past two failures to develop a software underlay
> for OpenStack services.  We can come back to mesos once kubernetes is
> implemented with a fresh perspective on the problem.
>
> Please vote +1 to attic the repo, or –1 not to attic the repo.  I'll leave
> the voting open until everyone has voted, or for 1 week until April 29th,
> 2016.
>
> Regards
> -steve
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-April/093143.html
> [2] https://github.com/openstack-attic
> [3] https://review.openstack.org/#/c/304182/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


I am +1 on this.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-01 Thread Davanum Srinivas
Steve,

Thanks for bringing up this decision-making to the open forum.

This is a tough decision. "Us vs them" I am hoping will not happen
this time around, as we'll watch out for that. Since there has been
talk about ansible getting split out eventually, splitting k8s into a
separate repo would be the right decision. If we do need git surgery
down the line, we can call on experts in our community :)

So +1 to a separate repo.

Thanks,
Dims


On Sun, May 1, 2016 at 4:03 PM, Steven Dake (stdake)  wrote:
> Ryan had rightly pointed out that when we made the original proposal at 9am
> that morning, we had asked folks if they wanted to participate in a separate
> repository.
>
> I don't think a separate repository is the correct approach based upon one
> off private conversations with folks at summit.  Many people from that list
> approached me and indicated they would like to see the work integrated in
> one repository as outlined in my vote proposal email.  The reasons I heard
> were:
>
> Better integration of the community
> Better integration of the code base
> Doesn't present an us vs them mentality that one could argue happened during
> kolla-mesos
> A second repository makes k8s a second class citizen deployment architecture
> without a voice in the full deployment methodology
> Two gating methods versus one
> No going back to a unified repository while preserving git history
>
> In favor of the separate repositories, I heard
>
> It presents a unified workspace for kubernetes alone
> Packaging without ansible is simpler as the ansible directory need not be
> deleted
>
> There were other complaints but not many pros.  Unfortunately I failed to
> communicate these complaints to the core team prior to the vote, so now is
> the time for fixing that.
>
> I'll leave it open to the new folks that want to do the work if they want to
> work on an offshoot repository and open us up to the possible problems
> above.
>
> If you are on this list:
>
> Ryan Hallisey
> Britt Houser
>
> mark casey
>
> Steven Dake (delta-alpha-kilo-echo)
>
> Michael Schmidt
>
> Marian Schwarz
>
> Andrew Battye
>
> Kevin Fox (kfox)
>
> Sidharth Surana (ssurana)
>
>  Michal Rostecki (mrostecki)
>
>   Swapnil Kulkarni (coolsvap)
>
>   MD NADEEM (mail2nadeem92)
>
>   Vikram Hosakote (vhosakot)
>
>   Jeff Peeler (jpeeler)
>
>   Martin Andre (mandre)
>
>   Ian Main (Slower)
>
> Hui Kang (huikang)
>
> Serguei Bezverkhi (sbezverk)
>
> Alex Polvi (polvi)
>
> Rob Mason
>
> Alicja Kwasniewska
>
> sean mooney (sean-k-mooney)
>
> Keith Byrne (kbyrne)
>
> Zdenek Janda (xdeu)
>
> Brandon Jozsa (v1k0d3n)
>
> Rajath Agasthya (rajathagasthya)
> Jinay Vora
> Hui Kang
> Davanum Srinivas
>
>
>
> Please speak up if you are in favor of a separate repository or a unified
> repository.
>
> The core reviewers will still take responsibility for determining if we
> proceed on the action of implementing kubernetes in general.
>
> Thank you
> -steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] rax-ord misbehaving in the gate

2016-05-01 Thread Steven Dake (stdake)
 Hey folks,

We really need to solve rax-ord in the gate.  For a while it was removed because 
its gate jobs were failing consistently across OpenStack.  Does anyone have 
any idea why it is failing in the Kolla case?

Thanks
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-01 Thread Steven Dake (stdake)


On 5/1/16, 4:22 PM, "Ryan Hallisey"  wrote:

>I'm voting for separate repo.  I'm not keen on adding 20 new cores to a
>stable repo.  I'd prefer it to be in a different repo while it gets
>going.  Yes, we will lose some git history *if* we vote to merge it into
>the main repo.  It's not a guarantee.

Just to clarify this point, we are not adding people to kolla-core who
want to participate in this effort.  Instead they would go in the
kolla-k8s-core ACL and would have effective permissions over the entire
repo, with the caveat that these individuals only approve commits to the
kubernetes repository.

>
>If this were to go into the main repo, I think it becomes harder for both
>ansible and kubernetes to ever come out.  I'd rather start outside since
>it's easier to move in if we choose to do that.

You have failed to convince me that having these orchestration engines in
separate repos is a good idea. :)

Regards
-steve
>
>-Ryan
>
>- Original Message -
>From: "Steven Dake (stdake)" 
>To: "OpenStack Development Mailing List (not for usage questions)"
>
>Sent: Sunday, May 1, 2016 5:03:57 PM
>Subject: [openstack-dev] [kolla][kubernetes] One repo vs two
>
>Ryan had rightly pointed out that when we made the original proposal at 9am
>that morning, we had asked folks if they wanted to participate in a separate
>repository. 
>
>I don't think a separate repository is the correct approach based upon
>one off private conversations with folks at summit. Many people from that
>list approached me and indicated they would like to see the work
>integrated in one repository as outlined in my vote proposal email. The
>reasons I heard were:
>
>
>* Better integration of the community
>* Better integration of the code base
>* Doesn't present an us vs them mentality that one could argue
>happened during kolla-mesos
>* A second repository makes k8s a second class citizen deployment
>architecture without a voice in the full deployment methodology
>* Two gating methods versus one
>* No going back to a unified repository while preserving git history
>In favor of the separate repositories, I heard
>
>
>* It presents a unified workspace for kubernetes alone
>* Packaging without ansible is simpler as the ansible directory need
>not be deleted 
>There were other complaints but not many pros. Unfortunately I failed to
>communicate these complaints to the core team prior to the vote, so now
>is the time for fixing that.
>
>I'll leave it open to the new folks that want to do the work if they want
>to work on an offshoot repository and open us up to the possible problems
>above. 
>
>If you are on this list:
>
>
>
>* Ryan Hallisey
>* Britt Houser
>
>
>* mark casey 
>
>
>* Steven Dake (delta-alpha-kilo-echo)
>
>
>* Michael Schmidt
>
>
>* Marian Schwarz
>
>
>* Andrew Battye
>
>
>* Kevin Fox (kfox)
>
>
>* Sidharth Surana (ssurana)
>
>
>* Michal Rostecki (mrostecki)
>
>
>* Swapnil Kulkarni (coolsvap)
>
>
>* MD NADEEM (mail2nadeem92)
>
>
>* Vikram Hosakote (vhosakot)
>
>
>* Jeff Peeler (jpeeler)
>
>
>* Martin Andre (mandre)
>
>
>* Ian Main (Slower)
>
>
>* Hui Kang (huikang)
>
>
>* Serguei Bezverkhi (sbezverk)
>
>
>* Alex Polvi (polvi)
>
>
>* Rob Mason 
>
>
>* Alicja Kwasniewska
>
>
>* sean mooney (sean-k-mooney)
>
>
>* Keith Byrne (kbyrne)
>
>
>* Zdenek Janda (xdeu)
>
>
>* Brandon Jozsa (v1k0d3n)
>
>
>* Rajath Agasthya (rajathagasthya)
>* Jinay Vora 
>* Hui Kang 
>* Davanum Srinivas
>
>
>Please speak up if you are in favor of a separate repository or a unified
>repository. 
>
>The core reviewers will still take responsibility for determining if we
>proceed on the action of implementing kubernetes in general.
>
>Thank you 
>-steve 
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-01 Thread Ihor Dvoretskyi
I agree with the described benefits of separating the repos, and my +1 is
for a separate repo.

On Sun, May 1, 2016 at 4:22 PM, Ryan Hallisey  wrote:

> I'm voting for separate repo.  I'm not keen on adding 20 new cores to a
> stable repo.  I'd prefer it to be in a different repo while it gets going.
> Yes, we will lose some git history *if* we vote to merge it into the main
> repo.  It's not a guarantee.
>
> If this were to go into the main repo, I think it becomes harder for both
> ansible and kubernetes to ever come out.  I'd rather start outside since
> it's easier to move in if we choose to do that.
>
> -Ryan
>
> - Original Message -
> From: "Steven Dake (stdake)" 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Sent: Sunday, May 1, 2016 5:03:57 PM
> Subject: [openstack-dev] [kolla][kubernetes] One repo vs two
>
> Ryan had rightly pointed out that when we made the original proposal at 9am
> that morning, we had asked folks if they wanted to participate in a separate
> repository.
>
> I don't think a separate repository is the correct approach based upon one
> off private conversations with folks at summit. Many people from that list
> approached me and indicated they would like to see the work integrated in
> one repository as outlined in my vote proposal email. The reasons I heard
> were:
>
>
> * Better integration of the community
> * Better integration of the code base
> * Doesn't present an us vs them mentality that one could argue
> happened during kolla-mesos
> * A second repository makes k8s a second class citizen deployment
> architecture without a voice in the full deployment methodology
> * Two gating methods versus one
> * No going back to a unified repository while preserving git history
> In favor of the separate repositories, I heard
>
>
> * It presents a unified workspace for kubernetes alone
> * Packaging without ansible is simpler as the ansible directory need
> not be deleted
> There were other complaints but not many pros. Unfortunately I failed to
> communicate these complaints to the core team prior to the vote, so now is
> the time for fixing that.
>
> I'll leave it open to the new folks that want to do the work if they want
> to work on an offshoot repository and open us up to the possible problems
> above.
>
> If you are on this list:
>
>
>
> * Ryan Hallisey
> * Britt Houser
>
>
> * mark casey
>
>
> * Steven Dake (delta-alpha-kilo-echo)
>
>
> * Michael Schmidt
>
>
> * Marian Schwarz
>
>
> * Andrew Battye
>
>
> * Kevin Fox (kfox)
>
>
> * Sidharth Surana (ssurana)
>
>
> * Michal Rostecki (mrostecki)
>
>
> * Swapnil Kulkarni (coolsvap)
>
>
> * MD NADEEM (mail2nadeem92)
>
>
> * Vikram Hosakote (vhosakot)
>
>
> * Jeff Peeler (jpeeler)
>
>
> * Martin Andre (mandre)
>
>
> * Ian Main (Slower)
>
>
> * Hui Kang (huikang)
>
>
> * Serguei Bezverkhi (sbezverk)
>
>
> * Alex Polvi (polvi)
>
>
> * Rob Mason
>
>
> * Alicja Kwasniewska
>
>
> * sean mooney (sean-k-mooney)
>
>
> * Keith Byrne (kbyrne)
>
>
> * Zdenek Janda (xdeu)
>
>
> * Brandon Jozsa (v1k0d3n)
>
>
> * Rajath Agasthya (rajathagasthya)
> * Jinay Vora
> * Hui Kang
> * Davanum Srinivas
>
>
> Please speak up if you are in favor of a separate repository or a unified
> repository.
>
> The core reviewers will still take responsibility for determining if we
> proceed on the action of implementing kubernetes in general.
>
> Thank you
> -steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,

Ihor Dvoretskyi,
OpenStack Operations Engineer

---

Mirantis, Inc. (925) 808-FUEL
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Austin summit scheduler session recap

2016-05-01 Thread Matt Riedemann
On Wednesday morning Jay Pipes led a double session on the work going on 
in the Nova scheduler. The session etherpad is here [1].


Jay started off by taking us through a high-level overview of what was 
completed for the quantitative changes:


1. Resource classes

2. Resource providers

3. Online data migration of compute node RAM/CPU/disk inventory.

Then Jay talked through the in-progress quantitative changes:

1. Online data migration of allocation fields for instances.

2. Modeling of generic resource pools for things like shared storage and 
IP subnet allocation pools (for Neutron routed networks).


3. Creating a separate placement REST API endpoint for generic resource 
pools.


4. Cleanup/refactor around PCI device handling.

For the quantitative changes, we still need to migrate the PCI and NUMA 
fields.


In the second scheduler session we got more into the capabilities 
(qualitative) part of the request. For example, this could be exposing 
hypervisor capabilities from the compute node through the API or for 
scheduling.


Jay pointed out there have been several blueprints posted over time 
related to this.


We discussed just what a capability is, i.e. is a hypervisor version a 
capability since certain features are only exposed with certain 
versions?  For libvirt it isn't really. For example, you can only set 
the admin password in a guest if you have libvirt >=1.2.16. But for 
hyper-v there are features which are supported by both older and newer 
versions of the hypervisor but are generally considered better or more 
robust in the newer version. But for some things, like supporting 
hyperv-gen2 instances, we consider that 'supports-hyperv-gen2' is the 
capability. We can further break that down by value enums if we need 
more granular information about the capability.
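
As a toy illustration of that distinction (not Nova code; the
capability names and matching logic are invented for the example),
'supports-hyperv-gen2' is a capability in its own right, while CPU
features are enumerated values under a single capability:

host_capabilities = {
    "supports-hyperv-gen2": True,               # the capability itself
    "cpu-features": {"avx2", "aes", "sse4.2"},  # enumerated values
}

def host_satisfies(required, host=host_capabilities):
    """Return True if the host offers every requested capability."""
    for name, wanted in required.items():
        have = host.get(name)
        if isinstance(wanted, set):
            # Value enums: every requested enum must be present.
            if not isinstance(have, set) or not wanted <= have:
                return False
        elif have != wanted:
            return False
    return True

print(host_satisfies({"supports-hyperv-gen2": True,
                      "cpu-features": {"avx2"}}))  # True

As discussed under next steps below, exposing a brand-new capability
like 'supports-hyperv-gen2' in the REST API would require a
microversion, while merely growing the set of enumerated values (one
more CPU feature, say) would not.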


While informative, we left the session with some unanswered next steps:

1. Where to put the inventories/allocations tables. We have three 
choices: API DB, Nova (cell) DB, or a new placement DB. Leaving them in 
the cell DB would mean aggregating data which would be a mess. Putting 
them in a new placement DB would mean a 2nd new database in a short 
number of releases for deployers to manage. So I believe we agreed to 
put them in the API DB (but if others think otherwise please speak up); a 
rough sketch of what such tables might look like follows this list.


2. We talked for quite a while about what a capability is and isn't, but 
I didn't come away with a definitive answer. This might get teased out 
in Claudiu's spec [2]. Note, however, that on Friday we agreed that as 
far as microversions are concerned, a new capability exposed in the REST 
API requires a microversion. But new enumerations for a capability, e.g. 
CPU features, do not require a new microversion bump, there are just too 
many of them.


3. I think we're in agreement on the blueprints/roadmap that Jay has put 
forth, but it's unclear if we have an owner for each blueprint. Jay and 
Chris Dent own several of these and some others are helping out (Dan 
Smith has been doing a lot of the online data migration patches), but we 
don't have owners for everything.


4. We need to close out any obsolete blueprints from the list that Jay 
had in the etherpad. As already noted, several of these are older and 
probably superseded by current work, so the team just needs to flush these out.


5. Getting approval on the generic-resource-pools, 
resource-providers-allocations, standardizing capabilities (extra 
specs). The first two are pretty clear at this point, the specs just 
need to be rebased and reviewed in the next week. A lot of the code is 
already up for review. We're less clear on standardizing capabilities.
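
For point 1 above, here is a rough sketch of the shape such tables
might take in the API database, written with SQLAlchemy as is common
across OpenStack. The column names loosely follow the
resource-providers spec direction but are assumptions, not the final
schema.

# Rough sketch only: a guess at the inventories/allocations tables
# discussed in point 1; column names are assumptions.
from sqlalchemy import Column, Float, Integer, MetaData, String, Table

metadata = MetaData()

inventories = Table(
    'inventories', metadata,
    Column('id', Integer, primary_key=True),
    Column('resource_provider_id', Integer, nullable=False),
    Column('resource_class_id', Integer, nullable=False),  # e.g. VCPU
    Column('total', Integer, nullable=False),
    Column('reserved', Integer, nullable=False),
    Column('min_unit', Integer, nullable=False),
    Column('max_unit', Integer, nullable=False),
    Column('allocation_ratio', Float, nullable=False),
)

allocations = Table(
    'allocations', metadata,
    Column('id', Integer, primary_key=True),
    Column('resource_provider_id', Integer, nullable=False),
    Column('consumer_id', String(36), nullable=False),  # instance UUID
    Column('resource_class_id', Integer, nullable=False),
    Column('used', Integer, nullable=False),
)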


--

So the focus for the immediate future is going to have to be on 
completing the resource providers, inventory and allocation data 
migration code and generic resource pools. That's all the quantitative 
work and if we can push to get a lot of that done before the midcycle 
meetup it would be great, then we can see where we sit and discuss more 
about capabilities.


Jay - please correct or add to anything above.

[1] https://etherpad.openstack.org/p/newton-nova-scheduler
[2] https://review.openstack.org/#/c/286520/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] restarting SR-IOV/PCI Passthrough meeting

2016-05-01 Thread Moshe Levi
Small correction, the biweekly meetings will start from May 3rd. 

-Original Message-
From: Moshe Levi [mailto:mosh...@mellanox.com] 
Sent: Friday, April 29, 2016 8:13 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][neutron] restarting SR-IOV/PCI Passthrough 
meeting

Hi all,

I would like to restart the SR-IOV/PCI Pass-through meeting.
The main focus is to improve CI coverage and fix bugs on the Nova side.
I updated the agenda etherpad [1] with stuff to cover at the next meeting, Mar 
3rd 2016. The meeting will be biweekly.
I would like the Mellanox/Intel and other vendors' CI representatives to join 
the meeting, so that we can make progress in that area.
Please review the etherpad and add stuff that you want to talk about (remember 
the current focus is on CI and bug fixes). I updated the SR-IOV meeting chair; 
see [2]

Thanks,
Moshe Levi.
[1] - https://etherpad.openstack.org/p/sriov_meeting_agenda
[2] - https://review.openstack.org/#/c/311472/1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-01 Thread Ryan Hallisey
I'm voting for separate repo.  I'm not keen on adding 20 new cores to a stable 
repo.  I'd prefer it to be in a different repo while it gets going.  Yes, we 
will lose some git history *if* we vote to merge it into the main repo.  It's 
not a guarantee.

If this were to go into the main repo, I think it becomes harder for both 
ansible and kubernetes to ever come out.  I'd rather start outside since it's 
easier to move in if we choose to do that.

-Ryan

- Original Message -
From: "Steven Dake (stdake)" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Sunday, May 1, 2016 5:03:57 PM
Subject: [openstack-dev] [kolla][kubernetes] One repo vs two

Ryan had rightly pointed out that when we made the original proposal at 9am 
that morning, we had asked folks if they wanted to participate in a separate 
repository. 

I don't think a separate repository is the correct approach based upon one off 
private conversations with folks at summit. Many people from that list 
approached me and indicated they would like to see the work integrated in one 
repository as outlined in my vote proposal email. The reasons I heard were: 


* Better integration of the community 
* Better integration of the code base 
* Doesn't present an us vs them mentality that one could argue happened 
during kolla-mesos 
* A second repository makes k8s a second class citizen deployment 
architecture without a voice in the full deployment methodology 
* Two gating methods versus one 
* No going back to a unified repository while preserving git history 
In favor of the separate repositories, I heard 


* It presents a unified workspace for kubernetes alone 
* Packaging without ansible is simpler as the ansible directory need not be 
deleted 
There were other complaints but not many pros. Unfortunately I failed to 
communicate these complaints to the core team prior to the vote, so now is the 
time for fixing that. 

I'll leave it open to the new folks that want to do the work if they want to 
work on an offshoot repository and open us up to the possible problems above. 

If you are on this list: 



* Ryan Hallisey 
* Britt Houser 


* mark casey 


* Steven Dake (delta-alpha-kilo-echo) 


* Michael Schmidt 


* Marian Schwarz 


* Andrew Battye 


* Kevin Fox (kfox) 


* Sidharth Surana (ssurana) 


* Michal Rostecki (mrostecki) 


* Swapnil Kulkarni (coolsvap) 


* MD NADEEM (mail2nadeem92) 


* Vikram Hosakote (vhosakot) 


* Jeff Peeler (jpeeler) 


* Martin Andre (mandre) 


* Ian Main (Slower) 


* Hui Kang (huikang) 


* Serguei Bezverkhi (sbezverk) 


* Alex Polvi (polvi) 


* Rob Mason 


* Alicja Kwasniewska 


* sean mooney (sean-k-mooney) 


* Keith Byrne (kbyrne) 


* Zdenek Janda (xdeu) 


* Brandon Jozsa (v1k0d3n) 


* Rajath Agasthya (rajathagasthya) 
* Jinay Vora 
* Hui Kang 
* Davanum Srinivas 


Please speak up if you are in favor of a separate repository or a unified 
repository. 

The core reviewers will still take responsibility for determining if we proceed 
on the action of implementing kubernetes in general. 

Thank you 
-steve 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] composable roles team

2016-05-01 Thread Emilien Macchi
On Sun, May 1, 2016 at 3:52 PM, Tristan Cacqueray  wrote:
> That's exciting news; please find a couple of comments below.
>
> On 04/29/2016 08:27 PM, Emilien Macchi wrote:
>> Hi,
>>
>> One of the most urgent tasks we need to achieve in TripleO during
>> Newton cycle is the composable roles support.
>> So we decided to build a team that would focus on it during the next weeks.
>>
>> We started this etherpad:
>> https://etherpad.openstack.org/p/tripleo-composable-roles-work
>
> Tracking such a task isn't optimal with etherpads. Why not discuss the
> design on the mailing list and use a Gerrit topic instead?

++
we already have the topic:
https://review.openstack.org/#/q/topic:composable_service

For the design, I think we covered it enough during the Summit, and we
already have Keystone & Glance as part of composable roles, right now
the work is to implement the other services...

> In general, be wary of using etherpad for long-term collaboration; it's
> really best used only for one-off, short-lived events.
>
>>
>> So anyone can help or check where we are.
>> We're pushing / going to push a lot of patches, we would appreciate
>> some reviews and feedback.
>>
>> Also, I would like to propose to -1 every patch that is not
>> composable-role-helpful, it will help us to move forward. Our team
>> will be available to help in the patches, so we can all converge
>> together.
>
> This sounds like a stiff strategy; how are you going to deal with a
> stable branch fix, for example?

I don't think the plan is to backport this feature; it sounds like a lot
of work, but my opinion might be wrong.

> If a feature can't land without disruption, then why not use
> a special branch to be merged once the feature is complete?

The problem is that during our work, some people will update the
manifests, and that will affect us, since we're copying the code
somewhere else (into puppet-tripleo). That's why we might need some
outstanding help from the team to converge on the new model.
I know asking that is tough, but if we want to converge quickly, we
need the adoption to be accepted by everyone.
One thing we can do is ask our reviewer team to track the patches
that will need some work, and the composable team can help in the
review process.

Composable roles is a feature we are all waiting for; help
from our contributors will really save us time.

Thanks for your feedback,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-01 Thread Steven Dake (stdake)


From: Michał Jastrzębski >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Sunday, May 1, 2016 at 3:24 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two


While I'm not on this list, I'll speak up anyways :). At the summit we agreed that we 
start from a separate repo, and after kolla-k8s becomes stable, we either merge 
or don't merge.

I'm for separate repo.

Keep in mind that if we start with a separate repo we cannot merge it into the main 
kolla repo without losing git history, and it suffers from all the various 
problems of a separate repo.  But also, cores are welcome to chime in.

Regard
-steve

On May 1, 2016 4:06 PM, "Steven Dake (stdake)" 
> wrote:
Ryan had rightly pointed out that when we made the original proposal at 9am 
that morning, we had asked folks if they wanted to participate in a separate 
repository.

I don't think a separate repository is the correct approach based upon one off 
private conversations with folks at summit.  Many people from that list 
approached me and indicated they would like to see the work integrated in one 
repository as outlined in my vote proposal email.  The reasons I heard were:

  *   Better integration of the community
  *   Better integration of the code base
  *   Doesn't present an us vs them mentality that one could argue happened 
during kolla-mesos
  *   A second repository makes k8s a second class citizen deployment 
architecture without a voice in the full deployment methodology
  *   Two gating methods versus one
  *   No going back to a unified repository while preserving git history

In favor of the separate repositories, I heard

  *   It presents a unified workspace for kubernetes alone
  *   Packaging without ansible is simpler as the ansible directory need not be 
deleted

There were other complaints but not many pros.  Unfortunately I failed to 
communicate these complaints to the core team prior to the vote, so now is the 
time for fixing that.

I'll leave it open to the new folks that want to do the work if they want to 
work on an offshoot repository and open us up to the possible problems above.

If you are on this list:


  *   Ryan Hallisey
  *   Britt Houser

  *   mark casey

  *   Steven Dake (delta-alpha-kilo-echo)

  *   Michael Schmidt

  *   Marian Schwarz

  *   Andrew Battye

  *   Kevin Fox (kfox)

  *   Sidharth Surana (ssurana)

  *Michal Rostecki (mrostecki)

  * Swapnil Kulkarni (coolsvap)

  * MD NADEEM (mail2nadeem92)

  * Vikram Hosakote (vhosakot)

  * Jeff Peeler (jpeeler)

  * Martin Andre (mandre)

  * Ian Main (Slower)

  *   Hui Kang (huikang)

  *   Serguei Bezverkhi (sbezverk)

  *   Alex Polvi (polvi)

  *   Rob Mason

  *   Alicja Kwasniewska

  *   sean mooney (sean-k-mooney)

  *   Keith Byrne (kbyrne)

  *   Zdenek Janda (xdeu)

  *   Brandon Jozsa (v1k0d3n)

  *   Rajath Agasthya (rajathagasthya)
  *   Jinay Vora
  *   Hui Kang
  *   Davanum Srinivas


Please speak up if you are in favor of a separate repository or a unified 
repository.

The core reviewers will still take responsibility for determining if we proceed 
on the action of implementing kubernetes in general.

Thank you
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][release] freeze timelines for horizon in newton

2016-05-01 Thread Amrith Kumar
From: Rob Cresswell (rcresswe) [mailto:rcres...@cisco.com]
Sent: Friday, April 29, 2016 4:28 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [horizon][release] freeze timelines for horizon in 
newton

This has been discussed (just now) in the release management plan 
(https://etherpad.openstack.org/p/newton-relmgt-plan) See point 8 under 
Communication/Governance Changes. From an immediate standpoint, the RC phase of 
this cycle will be much stricter to prevent late breakages. Going forward, 
we’re likely going to establish an earlier feature freeze too, pending 
community discussion.
[amrith] Thanks Rob, I’ve updated the etherpad with a link to this mail thread.

On a separate note, this email prompted me to scan the governance for the 
dashboard plugins and it became apparent that several have changed their 
release tags, without informing Horizon of this release cadence expectation via 
IRC, email, or our plugin feedback fishbowl. If we are to continue building a 
good plugin ecosystem, the plugins *must* communicate their expectations to us 
upstream; we do not have the time to monitor every plugin.
[amrith] I’m unsure what this expectation is. I tried to google it but came up 
empty.

I assume that you are asking that we let you know when we release a new version 
of the dashboard, is that right? Or is it something else? In any event, would 
you provide me a link to something describing this expectation and I’ll make 
sure that we try and stay true to it.

Rob


On 29 Apr 2016, at 11:26, Amrith Kumar 
> wrote:

In the Trove review of the release schedule this morning, and in the 
retrospective of the mitaka release process, one question which was raised was 
the linkage between projects like Trove and Horizon.

This came up in the specific context of the fact that projects like Trove split 
out dashboard plugins (in the form of the trove-dashboard repository) late in the Mitaka cycle[3]. Late 
in the Mitaka cycle, a change in Horizon caused Trove to break very close to 
the feature freeze date.

So the question is whether we can assume that projects like Horizon will freeze 
in R-6 to ensure that (for example) Trove will freeze in R-5.

Thanks,

-amrith

[1] http://releases.openstack.org/newton/schedule.html
[2] https://review.openstack.org/#/c/311123/
[3] https://review.openstack.org/#/c/307221/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-01 Thread Michał Jastrzębski
While I'm not on this list, I'll speak up anyways :). At the summit we agreed
that we start from a separate repo, and after kolla-k8s becomes stable, we
either merge or don't merge.

I'm for separate repo.
On May 1, 2016 4:06 PM, "Steven Dake (stdake)"  wrote:

> Ryan had rightly pointed out that when we made the original proposal at 9am
> that morning, we had asked folks if they wanted to participate in a separate
>  repository.
>
> I don't think a separate repository is the correct approach based upon one
> off private conversations with folks at summit.  Many people from that list
> approached me and indicated they would like to see the work integrated in
> one repository as outlined in my vote proposal email.  The reasons I heard
> were:
>
>- Better integration of the community
>- Better integration of the code base
>- Doesn't present an us vs them mentality that one could argue
>happened during kolla-mesos
>- A second repository makes k8s a second class citizen deployment
>architecture without a voice in the full deployment methodology
>- Two gating methods versus one
>- No going back to a unified repository while preserving git history
>
> In favor of the separate repositories, I heard
>
>- It presents a unified workspace for kubernetes alone
>- Packaging without ansible is simpler as the ansible directory need
>not be deleted
>
> There were other complaints but not many pros.  Unfortunately I failed to
> communicate these complaints to the core team prior to the vote, so now is
> the time for fixing that.
>
> I'll leave it open to the new folks that want to do the work if they want
> to work on an offshoot repository and open us up to the possible problems
> above.
>
> If you are on this list:
>
>
>- Ryan Hallisey
>- Britt Houser
>
>
>- mark casey
>
>
>- Steven Dake (delta-alpha-kilo-echo)
>
>
>- Michael Schmidt
>
>
>- Marian Schwarz
>
>
>- Andrew Battye
>
>
>- Kevin Fox (kfox)
>
>
>- Sidharth Surana (ssurana)
>
>
>-  Michal Rostecki (mrostecki)
>
>
>-   Swapnil Kulkarni (coolsvap)
>
>
>-   MD NADEEM (mail2nadeem92)
>
>
>-   Vikram Hosakote (vhosakot)
>
>
>-   Jeff Peeler (jpeeler)
>
>
>-   Martin Andre (mandre)
>
>
>-   Ian Main (Slower)
>
>
>- Hui Kang (huikang)
>
>
>- Serguei Bezverkhi (sbezverk)
>
>
>- Alex Polvi (polvi)
>
>
>- Rob Mason
>
>
>- Alicja Kwasniewska
>
>
>- sean mooney (sean-k-mooney)
>
>
>- Keith Byrne (kbyrne)
>
>
>- Zdenek Janda (xdeu)
>
>
>- Brandon Jozsa (v1k0d3n)
>
>
>- Rajath Agasthya (rajathagasthya)
>- Jinay Vora
>- Hui Kang
>- Davanum Srinivas
>
>
>
> Please speak up if you are in favor of a separate repository or a unified
> repository.
>
> The core reviewers will still take responsibility for determining if we
> proceed on the action of implementing kubernetes in general.
>
> Thank you
> -steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-01 Thread Steven Dake (stdake)


On 5/1/16, 1:23 PM, "Ryan Hallisey"  wrote:
>snip

>Trust isn't the issue here. The spec that people signed up for says
>"Create kolla-kubernetes repo".
>
>In this thread there seems to be two votes.  One is the split repo
>discussion Kolla had at summit. The second is to create kolla-kubernetes.
>I'm all for creating kolla-kubernetes, given that I wrote the spec, but
>will defer to the core team as to whether this should be in kolla or
>kolla-kubernetes. 


Thanks for pointing out that I failed to communicate this effectively to
the core team as well as the people in the room.  The conversation
relating to splitting the repos came up in one-off conversations with
folks who were really not in favor of a split repo after our 9am session,
and I'm human - so chalk it up to that :).  That said, you're right, it's an
orthogonal issue, and as such I've kicked off another vote for the folks that
were interested in signing up for the work as to what *THEY* want.

Regards
-steve

>
>Thanks,
>Ryan
>
>
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-01 Thread Steven Dake (stdake)
Ryan had rightly pointed out that when we made the original proposal at 9am 
that morning, we had asked folks if they wanted to participate in a separate 
repository.

I don't think a separate repository is the correct approach based upon one off 
private conversations with folks at summit.  Many people from that list 
approached me and indicated they would like to see the work integrated in one 
repository as outlined in my vote proposal email.  The reasons I heard were:

  *   Better integration of the community
  *   Better integration of the code base
  *   Doesn't present an us vs them mentality that one could argue happened 
during kolla-mesos
  *   A second repository makes k8s a second class citizen deployment 
architecture without a voice in the full deployment methodology
  *   Two gating methods versus one
  *   No going back to a unified repository while preserving git history

In favor of the separate repositories, I heard

  *   It presents a unified workspace for kubernetes alone
  *   Packaging without ansible is simpler as the ansible directory need not be 
deleted

There were other complaints but not many pros.  Unfortunately I failed to 
communicate these complaints to the core team prior to the vote, so now is the 
time for fixing that.

I'll leave it open to the new folks that want to do the work if they want to 
work on an offshoot repository and open us up to the possible problems above.

If you are on this list:


  *   Ryan Hallisey
  *   Britt Houser

  *   mark casey

  *   Steven Dake (delta-alpha-kilo-echo)

  *   Michael Schmidt

  *   Marian Schwarz

  *   Andrew Battye

  *   Kevin Fox (kfox)

  *   Sidharth Surana (ssurana)

  *Michal Rostecki (mrostecki)

  * Swapnil Kulkarni (coolsvap)

  * MD NADEEM (mail2nadeem92)

  * Vikram Hosakote (vhosakot)

  * Jeff Peeler (jpeeler)

  * Martin Andre (mandre)

  * Ian Main (Slower)

  *   Hui Kang (huikang)

  *   Serguei Bezverkhi (sbezverk)

  *   Alex Polvi (polvi)

  *   Rob Mason

  *   Alicja Kwasniewska

  *   sean mooney (sean-k-mooney)

  *   Keith Byrne (kbyrne)

  *   Zdenek Janda (xdeu)

  *   Brandon Jozsa (v1k0d3n)

  *   Rajath Agasthya (rajathagasthya)
  *   Jinay Vora
  *   Hui Kang
  *   Davanum Srinivas


Please speak up if you are in favor of a separate repository or a unified 
repository.

The core reviewers will still take responsibility for determining if we proceed 
on the action of implementing kubernetes in general.

Thank you
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] composable roles team

2016-05-01 Thread Tristan Cacqueray
That's exciting news; please find a couple of comments below.

On 04/29/2016 08:27 PM, Emilien Macchi wrote:
> Hi,
> 
> One of the most urgent tasks we need to achieve in TripleO during
> Newton cycle is the composable roles support.
> So we decided to build a team that would focus on it during the next weeks.
> 
> We started this etherpad:
> https://etherpad.openstack.org/p/tripleo-composable-roles-work

Tracking such a task isn't optimal with etherpads. Why not discuss the
design on the mailing list and use a Gerrit topic instead?

In general, be wary of using etherpad for long-term collaboration; it's
really best used only for one-off, short-lived events.

> 
> So anyone can help or check where we are.
> We're pushing / going to push a lot of patches, we would appreciate
> some reviews and feedback.
> 
> Also, I would like to propose to -1 every patch that is not
> composable-role-helpful, it will help us to move forward. Our team
> will be available to help in the patches, so we can all converge
> together.

This sounds like a stiff strategy; how are you going to deal with a
stable branch fix, for example?

If a feature can't land without disruption, then why not use
a special branch to be merged once the feature is complete?


-Tristan



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-01 Thread Ryan Hallisey
-1 to not having a separate kolla-kubernetes repo.

> The kubernetes codebase will be maintained in the 
> https://github.com/openstack/kolla repository under the kubernetes top level 
> directory.

I don't agree here.  This needs to be its own repo.

> I feel we made a couple of errors with the creation of Kolla-mesos that 
> need correction. Our first error was that the kolla-mesos-core team 
> lacked a diversely affiliated team membership developing the code base. 
> The above list has significant diversity. 

I don't think diversity is a requirement for an alternative deployment tool to 
be created in the Kolla community. It would be great, I agree, but sometimes 
communities are diverse and sometimes they aren't.  Kolla can't require that. 
It's not in kolla's mission statement.  

> The second error is that the repository was split in the first place. 
> This resulted in a separate ABI to the containers being implemented which was 
> a sore spot for me personally. We did our best to build both 
> sides of the bridge here, but this time I'd like the bridge between these two 
> interests and set of individuals to be fully built before beginning. 

Kolla cores are there to guard the gate and we can only recommend a way to 
build a deployment tool around the Kolla containers. The creation of the 
alternative ABI was a Kolla-mesos choice. Therefore, next time the kolla cores, 
myself included, need to be more active in the discussion around the ABI in 
other deployment tools trying to consume kolla containers.  I agree that it 
would be great to unify around one consumption method of the containers, but 
that's not a guarantee.

> As such, I'd ask the existing kolla-core team to trust my judgement on this 
> point and 
> roll with it. We can always change the structure later if this model doesn't 
> work out as I expect it will, but if we started with split repos and a 
> changed structure to begin with, we can't go back to a non-split repo as the 
> action is 
> irreversible according to dims. 
> I know this proposal may seem uncomfortable for our existing kolla-core team. 
> I can assure you based upon twenty years of open source participation this 
> will result in a better outcome. We had talked about splitting the 
> repositories, 
> and our plan around that is to punt until such action is absolutely 
> necessary. Keeping things in one repository can always be split later, but a 
> premature split can never be undone without losing git commit history. 

I don't agree. A lot of people in the community liked the split of 
kolla-kubernetes, at least until the project reaches a point where Kolla cores need to 
revisit the repo split action.  Then, Kolla cores vote to pull kolla-kubernetes 
in or vote to split out kolla-ansible. Ansible still hasn't been split out and 
a unified CLI will guarantee both kolla-ansible and kolla-kubernetes will never 
be split. That's not a punt. We may lose some git history, but adding 30 new 
cores to kolla will do more damage to the project.

It's not a matter of trusting judgment. Every kolla core has a +1 or -1. We'll 
see if other cores agree.

> We will mark the kubernetes orchestration in Kolla as experimental until 
> existing feature parity is achieved in the kolla CLI tool. After the initial 
> implementation is ready, we will mark it as ready for evaluation . At the 
> conclusion 
> of Newton, assuming the implementation works well, we will mark the 
> implementation as "production ready", just as our current Ansible 
> orchestrated implementation is recorded. 
>
> ** All CLI features of the kolla-ansible shell script to be implemented for 
> "ready-for-evaluation" stage. ** 
>
> This includes the following CLI operations where they make sense: 
>
>
>1. Prechecks 
>    2. mariadb_recovery 
>3. Deploy 
>4. Post-deploy 
>5. Pull 
>6. Upgrade 
>    7. Reconfigure 
>8. Certificates 
> As part of this change, I will be submitting a change to rename kolla-ansible 
> to kolla with appropriate documentation changes. This one shell script will 
> in the future read from globals.yml a YAML variable, 
> "orchestration_engine", which may be either ansible or kubernetes. In this 
> way, the terminology I strongly dislike "first class citizen" will be removed 
> from our lexicon in the Kolla repository. Instead of first class/second class 
> citizen, we will have all orchestration systems as "first class citizens" 
> with varying levels of maturity. 

There will always be tools out there that choose not to join the Kolla repo.  
Pulling in a deployment tool that wants to use Kolla's containers is a mistake. 
 Everyone has the same level of access to kolla's containers.  In my mind, one 
repo doesn't send this message.

> Please vote +1 if in favor, -1 if not in favor. 7 votes will trigger early 
> closing of the vote and the creation of the kubernetes directory with a 
> README.rst. The voting will remain open for 1 week until May 6th unless a 

Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-01 Thread Kwasniewska, Alicja
+1

Cheers,
Alicja


-Original Message-
From: Hui Kang [mailto:hkang.sun...@gmail.com] 
Sent: Sunday, May 1, 2016 12:40 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes 
repository management proposal up for vote

+1

- Hui

On Sat, Apr 30, 2016 at 10:50 AM, Steven Dake (stdake)  wrote:
> Fellow core reviewers,
>
> We had a fantastic turnout at our fishbowl session on kubernetes as an 
> underlay for Kolla.  The etherpad documents the folks interested and the 
> discussion at summit [1].
>
> This proposal is mostly based upon a combination of several 
> discussions at open design meetings coupled with the kubernetes underlay 
> discussion.
>
> The proposal (and what we are voting on) is as follows:
>
> Folks in the following list will be added to a kolla-k8s-core group.
>
>  This kolla-k8s-core group will be responsible for code reviews and 
> code submissions to the kolla repository for the /kubernetes top level 
> directory.
> Individuals in kolla-k8s-core that consistently approve (+2) or 
> disapprove (-2) changes to TLD directories other than kubernetes 
> will be handled on a case by case basis with several "training 
> warnings" followed by removal from the kolla-k8s-core group.  The 
> kolla-k8s-core group will be added as a subgroup of the kolla-core 
> reviewer team, which means they in effect have all of the ACL access 
> as the existing kolla repository.  I think it is better in this case 
> to trust these individuals to do the right thing and only approve changes 
> for the kubernetes directory until proposed for the kolla-core 
> reviewer group where they can gate changes to any part of the repository.
>
> Britt Houser
>
> mark casey
>
> Steven Dake (delta-alpha-kilo-echo)
>
> Michael Schmidt
>
> Marian Schwarz
>
> Andrew Battye
>
> Kevin Fox (kfox)
>
> Sidharth Surana (ssurana)
>
> Michal Rostecki (mrostecki)
>
> Swapnil Kulkarni (coolsvap)
>
> MD NADEEM (mail2nadeem92)
>
> Vikram Hosakote (vhosakot)
>
> Jeff Peeler (jpeeler)
>
> Martin Andre (mandre)
>
> Ian Main (Slower)
>
> Hui Kang (huikang)
>
> Serguei Bezverkhi (sbezverk)
>
> Alex Polvi (polvi)
>
> Rob Mason
>
> Alicja Kwasniewska
>
> sean mooney (sean-k-mooney)
>
> Keith Byrne (kbyrne)
>
> Zdenek Janda (xdeu)
>
> Brandon Jozsa (v1k0d3n)
>
> Rajath Agasthya (rajathagasthya)
>
>
> If you already are in the kolla-core review team, you won't be added 
> to the kolla-k8s-core team as you will already have the necessary ACLs 
> to do the job.  If you feel you would like to join this initial 
> bootstrapping process, please add your name to the etherpad in [1].
>
> After 8 weeks (July 15th), folks that have not been actively reviewing
> or committing code will be removed from the kolla-k8s-core group.  We
> will use the governance repository metrics for team size [2], which are
> either 30 reviews over 6 months (pro-rated to this 8-week window, 10
> reviews) or 6 commits over 6 months (in this case, 2 commits) to the
> repository.  Folks that don't meet the qualifications are still welcome
> to commit to the repository and contribute code or documentation, but
> will lose approval rights on patches.
>
> The kubernetes codebase will be maintained in the
> https://github.com/openstack/kolla repository under the kubernetes top-level
> directory.  Contributors that become active in the kolla
> repository itself will be proposed over time to the kolla-core group.
> Only kolla-core members will be permitted to participate in policy
> decisions and the voting thereof, so there is some minimal extra
> responsibility involved in joining the kolla-core ACL team for those
> folks wanting to move into the kolla core team over time.  The goal is
> to eventually remove the kolla-k8s-core team entirely and have one
> core reviewer team in the kolla-core ACL.
>
> Members of the kolla-k8s-core group will have the ability to +2 or -2
> any change to the main kolla repository via ACLs; however, I propose
> we trust these folks to only +2/-2 changes related to the kubernetes
> directory in the kolla repository, and remove folks that consistently break
> this agreement.
> Initial errors as folks learn the system will be tolerated, and commits
> reverted as makes sense.
>
> I feel we made a couple of errors with the creation of kolla-mesos that need
> correction.  Our first error was that the kolla-mesos-core team lacked a
> diversely affiliated team membership developing the code base.  The above
> list has significant diversity.  The second error is that the repository was
> split in the first place.  This resulted in a separate ABI to the containers
> being implemented, which was a sore spot for me personally.  We did our best
> to build both sides of the bridge here, but this time I'd like the bridge
> between these two interests and sets of individuals to be fully built before
> any split is considered.

Re: [openstack-dev] [glance] Interest in contributing to OpenStack

2016-05-01 Thread Djimeli Konrad
Hello,

With respect to my previous mail, I would just like to add that I am
aware the GSoC 2016 and Outreachy 2016 application periods are over, and
that I have always been interested in contributing to these projects,
regardless of GSoC or Outreachy. I have also successfully gone through
the OpenStack tutorial on how to submit a first patch, and I hope to
start working on a bug soon.
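
(For anyone else starting out, the first-patch workflow from that tutorial
boils down to roughly the following; the branch name is illustrative.)

    git clone https://git.openstack.org/openstack/glance
    cd glance
    git review -s                  # one-time Gerrit remote setup
    git checkout -b my-first-fix   # illustrative topic branch
    # ...edit code, add a test, commit...
    git commit -a
    git review                     # upload the change to Gerrit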

On 1 May 2016 at 10:28, Djimeli Konrad  wrote:
> Hello,
>
> My name is Djimeli Konrad, a second-year computer science student from
> the University of Buea, Cameroon, Africa. I am a GSoC 2015 participant,
> and I have also worked on some open-source projects on GitHub
> (http://github.com/djkonro) and SourceForge
> (https://sourceforge.net/u/konrado/profile/). I am very passionate
> about cloud development, distributed systems, and virtualization, and I
> would like to contribute to OpenStack.
>
> I have gone through the OpenStack GSoC2016/Outreachy2016 projects
> (https://wiki.openstack.org/wiki/Internship_ideas) and I am
> interested in working on "Glance - Extended support for requests
> library" and "Glance - Develop a Python-based GLARE (GLance Artifacts
> REpository) client library and shell API". I would like some detail
> regarding these projects. Are they a priority? What is the expected
> outcome? And what are some good starting points?
>
> I am proficient with C, C++, and Python, and I have successfully built
> and set up OpenStack using DevStack.
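
(For reference, a minimal DevStack bring-up along the lines mentioned above;
the passwords are placeholders and the URL reflects 2016-era hosting.)

    git clone https://git.openstack.org/openstack-dev/devstack
    cd devstack
    cat > local.conf <<'EOF'
    [[local|localrc]]
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret
    EOF
    ./stack.sh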
>
> Thanks
> Konrad
> https://www.linkedin.com/in/konrad-djimeli-69809b97

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

