Re: [openstack-dev] Requirements for becoming approved official project

2016-04-30 Thread Shinobu Kinjo
Hi Lana,

Thank you for your advice regarding the documentation.
I will email the list.

Cheers,
Shinobu

On Sun, May 1, 2016 at 12:20 PM, Lana Brindley
 wrote:
> Hi Shinobu,
>
> I can help you with the documentation piece, at least.
>
> The documentation team are working on a new method to help projects document 
> their installation guides in their own repo, with publishing to 
> docs.openstack.org. This is explicitly to help projects meet the project 
> navigator requirements.
>
> Before Newton, we will have the infrastructure up, and hope to also have a 
> template and some other guides for you to help you do this. If you have 
> questions about getting started on this, please email the docs mailing list: 
> openstack-d...@lists.openstack.org
>
> Thanks,
> Lana
>
> On 29/04/16 22:40, Shinobu Kinjo wrote:
>> Hi Tom,
>>
>> First sorry for bothering you -;
>>
>> We are trying to make the tricircle project [1] one of the OpenStack
>> official projects, and we are referring to the project team guide to
>> confirm the requirements. [2] Reading this guide, I think what we need
>> to consider right now is open development (but I'm not 100% sure). [3]
>>
>> We have a blueprint. [4] We also have git repositories on
>> openstack.org and github.com. [5] [6]
>> There are a few bugs filed. [7] There are a few contributors.
>>
>> What we don't have now is official documentation, which is supposed
>> to be located at openstack.org. [8] This is because we are not an
>> officially approved project. [9] This situation is now a huge
>> bottleneck for our project.
>>
>> There was some advice from one of the developers, which pointed to
>> these guides. [1] [10]
>> If you could provide any suggestions, advice, or whatever else is
>> really necessary for us to become an officially approved project, it
>> would be MUCH appreciated.
>>
>> [1] https://wiki.openstack.org/wiki/Tricircle
>> [2] http://docs.openstack.org/project-team-guide/
>> [3] http://docs.openstack.org/project-team-guide/open-development.html
>> [4] https://launchpad.net/tricircle
>> [5] https://git.openstack.org/openstack/tricircle
>> [6] https://github.com/openstack/tricircle/
>> [7] http://bugs.launchpad.net/tricircle
>> [8] http://docs.openstack.org/developer/tricircle
>> [9] 
>> http://docs.openstack.org/infra/manual/creators.html#add-link-to-your-developer-documentation
>> [10] http://governance.openstack.org/reference/new-projects-requirements.html
>>
>> Thanks for your great help in advance!
>>
>> Cheers,
>> Shinobu
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> Lana Brindley
> Technical Writer
> Rackspace Cloud Builders Australia
> http://lanabrindley.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requirements for becoming approved official project

2016-04-30 Thread Lana Brindley
Hi Shinobu,

I can help you with the documentation piece, at least.

The documentation team are working on a new method to help projects document 
their installation guides in their own repo, with publishing to 
docs.openstack.org. This is explicitly to help projects meet the project 
navigator requirements. 

Before Newton, we will have the infrastructure up, and hope to also have a 
template and some other guides for you to help you do this. If you have 
questions about getting started on this, please email the docs mailing list: 
openstack-d...@lists.openstack.org

Thanks,
Lana

On 29/04/16 22:40, Shinobu Kinjo wrote:
> Hi Tom,
> 
> First sorry for bothering you -;
> 
> We are trying to make the tricircle project [1] one of the OpenStack
> official projects, and we are referring to the project team guide to
> confirm the requirements. [2] Reading this guide, I think what we need
> to consider right now is open development (but I'm not 100% sure). [3]
> 
> We have a blueprint. [4] We also have git repositories on
> openstack.org and github.com. [5] [6]
> There are a few bugs filed. [7] There are a few contributors.
> 
> What we don't have now is official documentation, which is supposed to
> be located at openstack.org. [8] This is because we are not an
> officially approved project. [9] This situation is now a huge
> bottleneck for our project.
> 
> There was some advice from one of the developers, which pointed to
> these guides. [1] [10]
> If you could provide any suggestions, advice, or whatever else is
> really necessary for us to become an officially approved project, it
> would be MUCH appreciated.
> 
> [1] https://wiki.openstack.org/wiki/Tricircle
> [2] http://docs.openstack.org/project-team-guide/
> [3] http://docs.openstack.org/project-team-guide/open-development.html
> [4] https://launchpad.net/tricircle
> [5] https://git.openstack.org/openstack/tricircle
> [6] https://github.com/openstack/tricircle/
> [7] http://bugs.launchpad.net/tricircle
> [8] http://docs.openstack.org/developer/tricircle
> [9] 
> http://docs.openstack.org/infra/manual/creators.html#add-link-to-your-developer-documentation
> [10] http://governance.openstack.org/reference/new-projects-requirements.html
> 
> Thanks for your great help in advance!
> 
> Cheers,
> Shinobu
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kolla][compute-kit plugins] Nova and neutron plugin bps

2016-04-30 Thread Hui Kang
Hi, Kolla core reviewers:
In one of the Kolla work sessions, the discussion touched on the
plugins for the compute kit [1]. There are two bps I think are
relevant to this topic regarding nova and neutron: nova-docker [3] and
neutron-OVN [2]. Could any kolla core take a look at these two
bps and provide some feedback? Thanks.

- Hui

[1] https://etherpad.openstack.org/p/kolla-newton-summit-plugin-planning
[2] https://blueprints.launchpad.net/kolla/+spec/ovn-controller-neutron
[3] https://blueprints.launchpad.net/kolla/+spec/nova-docker-container

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] composable roles team

2016-04-30 Thread Ryan Brady
On Fri, Apr 29, 2016 at 4:27 PM, Emilien Macchi  wrote:

> Hi,
>
> One of the most urgent tasks we need to achieve in TripleO during the
> Newton cycle is composable roles support.
> So we decided to build a team that will focus on it over the next weeks.
>
> We started this etherpad:
> https://etherpad.openstack.org/p/tripleo-composable-roles-work
>
> So anyone can help or check where we are.
> We're pushing / going to push a lot of patches, and we would appreciate
> some reviews and feedback.
>
> Also, I would like to propose that we -1 every patch that is not
> composable-role-helpful; it will help us move forward. Our team
> will be available to help with the patches, so we can all converge
> together.
>

-1'ing everything else is too heavy-handed.


>
> Any feedback is welcome, thanks.
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
- Ryan

Ryan Brady
rbr...@redhat.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tc] Neutron stadium evolution from Austin

2016-04-30 Thread Brandon Logan
I have to agree with Doug.  This proposal isn't saying you can't have a
neutron plugin/driver; it's just that it won't be under the governance of
neutron.  As long as the plugin and driver interfaces are there and
relatively stable, you'll be able to use it.  Also, if I understood
correctly, you'll also be able to continue having a repository for your
plugin/driver in the openstack namespace.  Combined with everything
else Doug said, it seems fairly logical unless I've missed something.

Thanks,
Brandon

On Sat, 2016-04-30 at 14:42 -0600, Doug Wiegley wrote:
> 
> > On Apr 30, 2016, at 1:24 PM, Fawad Khaliq 
> > wrote:
> > 
> > Hi folks,
> > 
> > Hope everyone had a great summit in Austin and got back safe! :)
> > 
> > At the design summit, we had a Neutron stadium evolution session,
> > which needs your immediate attention as it will impact many
> > stakeholders of Neutron.
> > 
> > To summarize for everyone, our Neutron leadership made the following
> > proposal for the “greater good” of Neutron, to improve things and
> > reduce the burden on the Neutron PTL and core team of managing more
> > Neutron drivers:
> > 
> > Quoting the etherpad [1]
> > 
> > "No request for inclusion are accepted for projects focussed solely
> > on implementations and/or API extensions to non-open solutions.”
> 
> 
> Let’s be clear about official openstack projects versus in the
> ecosystem versus whatever, which is defined by the TC, not
> neutron: 
> https://governance.openstack.org/reference/new-projects-requirements.html
> 
> > 
> > To summarize for everyone, what this means is that all Neutron
> > drivers which implement non-open-source networking backends are
> > instantly out of the Neutron stadium and are marked as
> > "unofficial/unsupported/remotely affiliated", and the rest are
> > capable of being tagged as "supported/official”.
> > 
> 
> 
> So, before we throw around statements like “supported” vs
> “unsupported”, let’s take a look at what the stadium governance change
> really entails:
> 
> 
> - The neutron core team won’t review/merge/maintain patches for your
> plugin/driver. In many cases, this was already true.
> - The neutron release team won’t handle tagging your releases. In many
> cases, this was already true.
> - The neutron PTL is no longer involved in your repository’s
> governance. In many cases, this was effectively already true.
> 
> 
> It doesn’t mean it isn’t a valid project that supports neutron
> interfaces.
> 
> 
> In or out of the stadium, all plugins have these challenges:
> 
> 
> - If you’re not using a stable interface, you’ll break a lot.
> - If you are using a stable interface, you’ll still break some
> (standard rot).
> - Vendors will need to support and test their own code.
> 
> 
> Every time this comes up, people get upset that neutron is closing its
> doors, or somehow invalidating all the existing plugins. Let’s review:
> 
> 
> - The neutron api and plugin interfaces are not going away.
> - There is ongoing work to libify more interfaces, for the sake of
> plugins/drivers.
> - There is a strong push for more documentation to make integrating
> better.
> - Non-stadium projects still have access to their infra repos and CI
> resources.
> 
> 
> Armando’s proposal was about recognizing reality, not some huge change
> in how things actually work. What is the point of having a project
> governed by Neutron that isn’t doing anything but consuming neutron
> interfaces, and is otherwise uninvolved? How can you expect neutron to
> vouch for those? What is your proposal?
> 
> 
> Thanks,
> doug
> 
> 
> 
> > 
> > This eliminates all commercial Neutron drivers developed for many
> > service providers and enterprises who have deployed OpenStack
> > successfully with these drivers. It’s unclear how the OpenStack
> > Foundation will communicate its stance to all the users, but
> > clearly this is a huge setback for OpenStack and Neutron. Neutron
> > will essentially become closed to all existing, non-open drivers,
> > even if these drivers have been compliant with Neutron API for years
> > and users have them deployed in production, forcing users to
> > re-evaluate their options.
> > 
> > Furthermore, this proposal will erode confidence in Neutron and
> > OpenStack, and destroy much of the value that the community has
> > worked so hard to build over the years.
> > 
> > As a representative and member of the OpenStack community and
> > maintainer of a Neutron driver (since Grizzly), I am deeply
> > disappointed and disagree with this statement [2]. Tossing out all
> > the non-open solutions is not in the best interest of the end user
> > companies that have built working OpenStack clusters. This proposal
> > will lead OpenStack end users who deployed different drivers to
> > think twice about OpenStack communities’ commitment to deliver
> > solutions they need. Furthermore, this proposal punishes OpenStack
> > companies who developed commercial backend drivers to help end users
> > bring up 

Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-04-30 Thread Hui Kang
+1

- Hui

On Sat, Apr 30, 2016 at 10:50 AM, Steven Dake (stdake)  wrote:
> Fellow core reviewers,
>
> We had a fantastic turnout at our fishbowl kubernetes as an underlay for
> Kolla session.  The etherpad documents the folks interested and discussion
> at summit[1].
>
> This proposal is mostly based upon a combination of several discussions at
> open design meetings coupled with the kubernetes underlay discussion.
>
> The proposal (and what we are voting on) is as follows:
>
> Folks in the following list will be added to a kolla-k8s-core group.
>
>  This kolla-k8s-core group will be responsible for code reviews and code
> submissions to the kolla repository for the /kubernetes top level directory.
> Individuals in kolla-k8s-core that consistently approve (+2) or disapprove
> (-2) changes to TLD directories other than kubernetes will be handled on a
> case-by-case basis, with several "training warnings" followed by removal
> from the kolla-k8s-core group.  The kolla-k8s-core group will be added as a
> subgroup of the kolla-core reviewer team, which means they in effect have
> all of the ACL access of the existing kolla repository.  I think it is
> better in this case to trust these individuals to do the right thing and
> only approve changes for the kubernetes directory until they are proposed
> for the kolla-core reviewer group, where they can gate changes to any part
> of the repository.
>
> Britt Houser
>
> mark casey
>
> Steven Dake (delta-alpha-kilo-echo)
>
> Michael Schmidt
>
> Marian Schwarz
>
> Andrew Battye
>
> Kevin Fox (kfox)
>
> Sidharth Surana (ssurana)
>
>  Michal Rostecki (mrostecki)
>
>   Swapnil Kulkarni (coolsvap)
>
>   MD NADEEM (mail2nadeem92)
>
>   Vikram Hosakote (vhosakot)
>
>   Jeff Peeler (jpeeler)
>
>   Martin Andre (mandre)
>
>   Ian Main (Slower)
>
> Hui Kang (huikang)
>
> Serguei Bezverkhi (sbezverk)
>
> Alex Polvi (polvi)
>
> Rob Mason
>
> Alicja Kwasniewska
>
> sean mooney (sean-k-mooney)
>
> Keith Byrne (kbyrne)
>
> Zdenek Janda (xdeu)
>
> Brandon Jozsa (v1k0d3n)
>
> Rajath Agasthya (rajathagasthya)
>
>
> If you already are in the kolla-core review team, you won't be added to the
> kolla-k8s-core team as you will already have the necessary ACLs to do the
> job.  If you feel you would like to join this initial bootstrapping process,
> please add your name to the etherpad in [1].
>
> After 8 weeks (July 15th), folks that have not been actively reviewing or
> committing code will be removed from the kolla-k8s-core group.  We will use
> the governance repository metrics for team size [2] which is either 30
> reviews over 6 months (in this case, 10 reviews), or 6 commits over 6 months
> (in this case 2 commits) to the repository.  Folks that don't meet the
> qualifications are still welcome to commit to the repository and contribute
> code or documentation but will lose approval rights on patches.
>
> The kubernetes codebase will be maintained in the
> https://github.com/openstack/kolla repository under the kubernetes top level
> directory.  Contributors that become active in the kolla repository itself
> will be proposed over time to the kolla-core group.  Only core-kolla members
> will be permitted to participate in policy decisions and voting thereof, so
> there is some minimal extra responsibility involved in joining the
> kolla-core ACL team for those folks wanting to move into the kolla core team
> over time.  The goal will be to over time entirely remove the kolla-k8s-core
> team and make one core reviewer team in the kolla-core ACL.
>
> Members in the kolla-k8s-core group will have the ability to +2 or –2 any
> change to the main kolla repository via ACLs, however, I propose we trust
> these folks to only +2/-2 changes related to the kubernetes directory in the
> kolla repository and remove folks that consistently break this agreement.
> Initial errors as folks learn the system will be tolerated and commits
> reverted as makes sense.
>
> I feel we made a couple of errors with the creation of Kolla-mesos that
> need correction.  Our first error was that the kolla-mesos-core team lacked
> a diversely affiliated team membership developing the code base.  The above
> list has significant diversity.  The second error is that the repository was
> split in the first place.  This resulted in a separate ABI to the containers
> being implemented which was a sore spot for me personally.  We did our best
> to build both sides of the bridge here, but this time I'd like the bridge
> between these two interests and set of individuals to be fully built before
> beginning.  As such, I'd ask the existing kolla-core team to trust my
> judgement on this point and roll with it.  We can always change the
> structure later if this model doesn't work out as I expect it will, but if
> we started with split repos and a changed structure to begin with, we can't
> go back to a non-split repo as the action is irreversible according to dims.
>
> I know this proposal may seem 

Re: [openstack-dev] [neutron][tc] Neutron stadium evolution from Austin

2016-04-30 Thread Doug Wiegley

> On Apr 30, 2016, at 1:24 PM, Fawad Khaliq  wrote:
> 
> Hi folks,
> 
> Hope everyone had a great summit in Austin and got back safe! :)
> 
> At the design summit, we had a Neutron stadium evolution session, which needs 
> your immediate attention as it will impact many stakeholders of Neutron.
> 
> To summarize for everyone, our Neutron leadership made the following proposal 
> for the “greater good” of Neutron, to improve things and reduce the burden on 
> the Neutron PTL and core team of managing more Neutron drivers:
> 
> Quoting the etherpad [1]
> 
> "No request for inclusion are accepted for projects focussed solely on 
> implementations and/or API extensions to non-open solutions.”

Let’s be clear about official openstack projects versus in the ecosystem versus 
whatever, which is defined by the TC, not neutron: 
https://governance.openstack.org/reference/new-projects-requirements.html 

> 
> To summarize for everyone, what this means is that all Neutron drivers which 
> implement non-open-source networking backends are instantly out of the 
> Neutron stadium and are marked as "unofficial/unsupported/remotely 
> affiliated", and the rest are capable of being tagged as "supported/official”.

So, before we throw around statements like “supported” vs “unsupported”, let’s 
take a look at what the stadium governance change really entails:

- The neutron core team won’t review/merge/maintain patches for your 
plugin/driver. In many cases, this was already true.
- The neutron release team won’t handle tagging your releases. In many cases, 
this was already true.
- The neutron PTL is no longer involved in your repository’s governance. In 
many cases, this was effectively already true.

It doesn’t mean it isn’t a valid project that supports neutron interfaces.

In or out of the stadium, all plugins have these challenges:

- If you’re not using a stable interface, you’ll break a lot.
- If you are using a stable interface, you’ll still break some (standard rot).
- Vendors will need to support and test their own code.

Every time this comes up, people get upset that neutron is closing its doors, 
or somehow invalidating all the existing plugins. Let’s review:

- The neutron api and plugin interfaces are not going away.
- There is ongoing work to libify more interfaces, for the sake of 
plugins/drivers.
- There is a strong push for more documentation to make integrating better.
- Non-stadium projects still have access to their infra repos and CI resources.

Armando’s proposal was about recognizing reality, not some huge change in how 
things actually work. What is the point of having a project governed by Neutron 
that isn’t doing anything but consuming neutron interfaces, and is otherwise 
uninvolved? How can you expect neutron to vouch for those? What is your 
proposal?

Thanks,
doug


> 
> This eliminates all commercial Neutron drivers developed for many service 
> providers and enterprises who have deployed OpenStack successfully with these 
> drivers. It’s unclear how the OpenStack Foundation will communicate its 
> stance to all the users, but clearly this is a huge setback for OpenStack 
> and Neutron. Neutron will essentially become closed to all existing, non-open 
> drivers, even if these drivers have been compliant with Neutron API for years 
> and users have them deployed in production, forcing users to re-evaluate 
> their options.
> 
> Furthermore, this proposal will erode confidence in Neutron and OpenStack, 
> and destroy much of the value that the community has worked so hard to build 
> over the years.
> 
> As a representative and member of the OpenStack community and maintainer of a 
> Neutron driver (since Grizzly), I am deeply disappointed and disagree with 
> this statement [2]. Tossing out all the non-open solutions is not in the best 
> interest of the end user companies that have built working OpenStack 
> clusters. This proposal will lead OpenStack end users who deployed different 
> drivers to think twice about OpenStack communities’ commitment to deliver 
> solutions they need. Furthermore, this proposal punishes OpenStack companies 
> who developed commercial backend drivers to help end users bring up OpenStack 
> clouds.
> 
> Also, we have to realize that this proposal divides the community rather than 
> unifying it. If it proceeds, it seems all OpenStack projects should follow 
> for consistency. For example, this should apply to Nova, which means Hyper-V 
> and vSphere can't be part of Nova, PLUMgrid can't be part of Kuryr, and ABC 
> company cannot have a driver/plugin for an XYZ project.
> 
> Another thing to note: for operators, the benefit is that the flexibility 
> up until now has allowed them to embark on successful OpenStack deployments. 
> For those operators, yanking out support they’ve come to depend on makes 
> things worse. While certain team members may prefer only open-source 

Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-04-30 Thread Mike Bayer



On 04/30/2016 10:50 AM, Clint Byrum wrote:

Excerpts from Roman Podoliaka's message of 2016-04-29 12:04:49 -0700:




I'm curious why you think setting wsrep_sync_wait=1 wouldn't help.

The exact example appears in the Galera documentation:

http://galeracluster.com/documentation-webpages/mysqlwsrepoptions.html#wsrep-sync-wait

The moment you say 'SET SESSION wsrep_sync_wait=1', the behavior should
prevent the list problem you see, and it should not matter that it is
a separate session, as that is the entire point of the variable:



we prefer to keep it off and just point applications at a single node 
using master/passive/passive in HAProxy, so that we don't have the 
unnecessary performance hit of waiting for all transactions to 
propagate; we just stick on one node at a time.   We've fixed a lot of 
issues in our config in ensuring that HAProxy definitely keeps all 
clients on exactly one Galera node at a time.




"When you enable this parameter, the node triggers causality checks in
response to certain types of queries. During the check, the node blocks
new queries while the database server catches up with all updates made
in the cluster to the point where the check was begun. Once it reaches
this point, the node executes the original query."

In the active/passive case where you never use the passive node as a
read slave, one could actually set wsrep_sync_wait=1 globally. This will
cause a ton of lag while new queries happen on the new active and old
transactions are still being applied, but that's exactly what you want,
so that when you fail over, nothing proceeds until all writes from the
original active node are applied and available on the new active node.
It would help if your failover technology actually _breaks_ connections
to a presumed dead node, so writes stop happening on the old one.


If HAProxy is failing over from the master, which is no longer 
reachable, to another passive node, which is reachable, that means that 
master is partitioned and will leave the Galera primary component.   It 
also means all current database connections are going to be bounced off, 
which will cause errors for those clients either in the middle of an 
operation, or if a pooled connection is reused before it is known that 
the connection has been reset.  So failover is usually not an error-free 
situation in any case from a database client perspective and retry 
schemes are always going to be needed.


Additionally, the purpose of the enginefacade [1] is to allow OpenStack 
applications to fix their often incorrectly written database access 
logic such that in many (most?) cases, a single logical operation is no 
longer unnecessarily split among multiple transactions when possible. 
I know that this is not always feasible in the case where multiple web 
requests are coordinating, however.


That leaves only the very infrequent scenario of, the master has 
finished sending a write set off, the passives haven't finished 
committing that write set, the master goes down and HAProxy fails over 
to one of the passives, and the application that just happens to also be 
connecting fresh onto that new passive node in order to perform the next 
operation that relies upon the previously committed data so it does not 
see a database error, and instead runs straight onto the node where the 
committed data it's expecting hasn't arrived yet.   I can't make the 
judgment for all applications if this scenario can't be handled like any 
other transient error that occurs during a failover situation, however 
if there is such a case, then IMO the wsrep_sync_wait (formerly known as 
wsrep_causal_reads) may be used on a per-transaction basis for that very 
critical, not-retryable-even-during-failover operation.  Allowing this 
variable to be set for the scope of a transaction and reset afterwards, 
and only when talking to Galera, is something we've planned to work into 
the enginefacade as well as an declarative transaction attribute that 
would be a pass-through on other systems.


[1] 
https://specs.openstack.org/openstack/oslo-specs/specs/kilo/make-enginefacade-a-facade.html
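
For readers who have not used it, here is a minimal sketch of the
enginefacade decorator style (the model, context class, and connection URL
are placeholders; the per-transaction wsrep_sync_wait attribute described
above is only planned, so it is not shown):

    # Illustrative sketch of the oslo.db enginefacade decorator style; the
    # model, context class, and connection URL are placeholders.  One
    # decorated function == one transaction, i.e. the "single logical
    # operation" pattern described above.
    from oslo_db.sqlalchemy import enginefacade
    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Widget(Base):
        __tablename__ = 'widget'
        id = Column(Integer, primary_key=True)
        name = Column(String(255))

    @enginefacade.transaction_context_provider
    class Context(object):
        """Carries the per-request transaction and session state."""

    context_manager = enginefacade.transaction_context()
    context_manager.configure(connection='mysql+pymysql://user:secret@db-vip/demo')

    @context_manager.writer
    def create_widget(context, name):
        # Runs inside a single writer transaction.
        context.session.add(Widget(name=name))

    @context_manager.reader
    def list_widgets(context):
        # Runs inside a separate, read-only transaction.
        return context.session.query(Widget).all()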





Also, If you thrash back and forth a bit, that could cause your app to
virtually freeze, but HAProxy and most other failover technologies allow
tuning timings so that you can stay off of a passive server long enough
to calm it down and fail more gracefully to it.

Anyway, this is why sometimes I do wonder if we'd be better off just
using MySQL with DRBD and good old pacemaker.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

[openstack-dev] [neutron][tc] Neutron stadium evolution from Austin

2016-04-30 Thread Fawad Khaliq
Hi folks,

Hope everyone had a great summit in Austin and got back safe! :)

At the design summit, we had a Neutron stadium evolution session, which
needs your immediate attention as it will impact many stakeholders of
Neutron.

To summarize for everyone, our Neutron leadership made the following
proposal for the “greater good” of Neutron, to improve things and reduce the
burden on the Neutron PTL and core team of managing more Neutron drivers:

Quoting the etherpad [1]

"No request for inclusion are accepted for projects focussed solely on
implementations and/or API extensions to non-open solutions."

To summarize for everyone, what this means is that all Neutron drivers
which implement non-open-source networking backends are instantly out of
the Neutron stadium and are marked as "unofficial/unsupported/remotely
affiliated", and the rest are capable of being tagged as "supported/official”.

This eliminates all commercial Neutron drivers developed for many service
providers and enterprises who have deployed OpenStack successfully with
these drivers. It’s unclear how the OpenStack Foundation will communicate
its stance to all the users, but clearly this is a huge setback for
OpenStack and Neutron. Neutron will essentially become closed to all
existing, non-open drivers, even if these drivers have been compliant with
Neutron API for years and users have them deployed in production, forcing
users to re-evaluate their options.

Furthermore, this proposal will erode confidence in Neutron and OpenStack,
and destroy much of the value that the community has worked so hard to
build over the years.

As a representative and member of the OpenStack community and maintainer of
a Neutron driver (since Grizzly), I am deeply disappointed and disagree
with this statement [2]. Tossing out all the non-open solutions is not in
the best interest of the end user companies that have built working
OpenStack clusters. This proposal will lead OpenStack end users who
deployed different drivers to think twice about the OpenStack community’s
commitment to deliver solutions they need. Furthermore, this proposal
punishes OpenStack companies who developed commercial backend drivers to
help end users bring up OpenStack clouds.

Also, we have to realize that this proposal divides the community rather
than unifying it. If it proceeds, it seems all OpenStack projects should
follow for consistency. For example, this should apply to Nova, which means
Hyper-V and vSphere can't be part of Nova, PLUMgrid can't be part of Kuryr,
and ABC company cannot have a driver/plugin for an XYZ project.

Another thing to note: for operators, the benefit is that the flexibility
up until now has allowed them to embark on successful OpenStack
deployments. For those operators, yanking out support they’ve come to
depend on makes things worse. While certain team members may prefer only
open-source technology, it’s better to let the end users make that decision
in the free competition of the marketplace, without introducing a notion of
official/supported vs. unofficial/unsupported drivers purely based on the
open-source nature of the driver backend, despite complete compliance
with the OpenStack ecosystem.

So if the Neutron PTL is overburdened, we should all help him somehow so
he does not have to make decisions and solve problems in a way that
breaks the OpenStack community like this.

I hope people will offer ideas and time to help discuss this, that our
Neutron leadership understands the points I am raising, and that we can
avoid going down a route that prevents Neutron, OpenStack, and its
ecosystem from expanding, so we continue to see "one" OpenStack community
with one open API.

[1]
https://etherpad.openstack.org/p/newton-neutron-community-stadium-evolution
[2] "No request for inclusion are accepted for projects focussed solely on
implementations and/or API extensions to non-open solutions."

Thanks,
Fawad Khaliq
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Live Migration: Austin summit update

2016-04-30 Thread Murray, Paul (HP Cloud)

Thanks Matt, I meant to cover CI but clearly omitted it. 


> On 30 Apr 2016, at 02:35, Matt Riedemann  wrote:
> 
>> On 4/29/2016 5:32 PM, Murray, Paul (HP Cloud) wrote:
>> The following summarizes status of the main topics relating to live
>> migration after the Newton design summit. Please feel free to correct
>> any inaccuracies or add additional information.
>> 
>> 
>> 
>> Paul
>> 
>> 
>> 
>> -
>> 
>> 
>> 
>> Libvirt storage pools
>> 
>> 
>> 
>> The storage pools work has been selected as one of the project review
>> priorities for Newton.
>> 
>> (see https://etherpad.openstack.org/p/newton-nova-summit-priorities )
>> 
>> 
>> 
>> Continuation of the libvirt storage pools work was discussed in the live
>> migration session. The proposal has grown to include a refactor of the
>> existing libvirt driver instance storage code. Justification for this is
>> based on three factors:
>> 
>> 1.   The code needs to be refactored to use storage pools
>> 
>> 2.   The code is complicated and uses inspection, which is poor practice
>> 
>> 3.   During the investigation Matt Booth discovered two CVEs in the
>> code – suggesting further work is justified
>> 
>> 
>> 
>> So the proposal is now to follow three stages:
>> 
>> 1.   Refactor the instance storage code
>> 
>> 2.   Adapt to use storage pools for the instance storage
>> 
>> 3.   Use storage pools to drive resize/migration
> 
> We also talked about the need for some additional test coverage for the 
> refactor work:
> 
> 1. A job that uses LVM on the experimental queue.
> 
> 2. ploop should be covered by the Virtuozzo Compute third party CI but we'll 
> need to double-check the test coverage there (is it running the tests that 
> hit the code paths being refactored). Note that they have their own blueprint 
> for implementing resize for ploop:
> 
> https://blueprints.launchpad.net/nova/+spec/virtuozzo-instance-resize-support
> 
> 3. Ceph testing - we already have a single-node job for Ceph that will test 
> the resize paths. We should also be testing Ceph-backed live migration in the 
> special live-migration job that Timofey has been working on.
> 
> 4. NFS testing - this also falls into the special live migration CI job that 
> will test live migration in different storage configurations within a single 
> run.
> 
>> 
>> 
>> 
>> Matt has code already starting the refactor and will continue with help
>> from Paul Carlton + Paul Murray. We will look for additional
>> contributors to help as we plan out the patches.
>> 
>> 
>> 
>> https://review.openstack.org/#/c/302117 : Persist libvirt instance
>> storage metadata
>> 
>> https://review.openstack.org/#/c/310505 : Use libvirt storage pools
>> 
>> https://review.openstack.org/#/c/310538 : Migrate libvirt volumes
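
For background, a minimal libvirt-python sketch of the storage pool
primitive this work builds on (the pool name, path, and volume details are
placeholders, not taken from the reviews above):

    # Illustrative sketch only (libvirt-python); pool name, path, and volume
    # details are placeholders.
    import libvirt

    POOL_XML = """
    <pool type='dir'>
      <name>instances</name>
      <target><path>/var/lib/instances</path></target>
    </pool>
    """

    VOL_XML = """
    <volume>
      <name>instance-0001-disk</name>
      <capacity unit='G'>10</capacity>
      <target><format type='qcow2'/></target>
    </volume>
    """

    conn = libvirt.open('qemu:///system')
    pool = conn.storagePoolDefineXML(POOL_XML, 0)
    pool.setAutostart(1)
    pool.build(0)       # create the backing directory if needed
    pool.create(0)      # start (activate) the pool
    vol = pool.createXML(VOL_XML, 0)
    print(vol.path())
    conn.close()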
>> 
>> 
>> 
>> Post copy
>> 
>> 
>> 
>> The spec to add post copy migration support in the libvirt driver was
>> discussed in the live migration session. Post copy guarantees completion
>> of a migration in linear time without needing to pause the VM. This can
>> be used as an alternative to pausing in live-migration-force-complete.
>> Pause or complete could also be invoked automatically under some
>> circumstances. The issue slowing these specs is how to decide which
>> method to use given they provide a different user experience but we
>> don’t want to expose virt specific features in the API. Two additional
>> specs listed below suggest possible generic ways to address the issue.
>> 
>> 
>> 
>> There were no conclusions reached in the session, so the debate will
>> continue on the specs. The first below is the main spec for the feature.
>> 
>> 
>> 
>> https://review.openstack.org/#/c/301509 : Adds post-copy live migration
>> support to Nova
>> 
>> https://review.openstack.org/#/c/305425 : Define instance availability
>> profiles
>> 
>> https://review.openstack.org/#/c/306561 : Automatic Live Migration
>> Completion
>> 
>> 
>> 
>> Live Migration orchestrated via conductor
>> 
>> 
>> 
>> The proposal to move orchestration of live migration to conductor was
>> discussed in the working session on Friday, presented by Andrew Laski on
>> behalf of Timofey Durakov. This one threw up a lot of debate both for
>> and against the general idea, but no support for the patches that have
>> been submitted along with the spec so far. The general feeling was that
>> we need to attack this, but we need to take some simple cleanup steps
>> first to get a better idea of the problem. Dan Smith proposed moving the
>> stateless pre-migration steps to a sequence of calls from conductor (as
>> opposed to the going back and forth between computes) as the first step.
>> 
>> 
>> 
>> https://review.openstack.org/#/c/292271 : Remove compute-compute
>> communication in live-migration
>> 
>> 
>> 
>> Cold and Live Migration Scheduling
>> 
>> 
>> 
>> When this patch merges all migrations will use the request spec for
>> scheduling: 

Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-04-30 Thread Clint Byrum
Excerpts from Roman Podoliaka's message of 2016-04-29 12:04:49 -0700:
> Hi Bogdan,
> 
> Thank you for sharing this! I'll need to familiarize myself with this
> Jepsen thing, but overall it looks interesting.
> 
> As it turns out, we already run Galera in multi-writer mode in Fuel
> unintentionally in the case when the active MySQL node goes down,
> HAProxy starts opening connections to a backup, then the active goes
> up again, HAProxy starts opening connections to the original MySQL
> node, but OpenStack services may still have connections opened to the
> backup in their connection pools - so now you may have connections to
> multiple MySQL nodes at the same time, exactly what you wanted to
> avoid by using active/backup in the HAProxy configuration.
> 
> ^ this actually leads to an interesting issue [1], when the DB state
> committed on one node is not immediately available on another one.
> Replication lag can be controlled  via session variables [2], but that
> does not always help: e.g. in [1] Nova first goes to Neutron to create
> a new floating IP, gets 201 (and Neutron actually *commits* the DB
> transaction) and then makes another REST API request to get a list of
> floating IPs by address - the latter can be served by another
> neutron-server, connected to another Galera node, which does not have
> the latest state applied yet due to 'slave lag' - it can happen that
> the list will be empty. Unfortunately, 'wsrep_sync_wait' can't help
> here, as it's two different REST API requests, potentially served by
> two different neutron-server instances.
> 

I'm curious why you think setting wsrep_sync_wait=1 wouldn't help.

The exact example appears in the Galera documentation:

http://galeracluster.com/documentation-webpages/mysqlwsrepoptions.html#wsrep-sync-wait

The moment you say 'SET SESSION wsrep_sync_wait=1', the behavior should
prevent the list problem you see, and it should not matter that it is
a separate session, as that is the entire point of the variable:

"When you enable this parameter, the node triggers causality checks in
response to certain types of queries. During the check, the node blocks
new queries while the database server catches up with all updates made
in the cluster to the point where the check was begun. Once it reaches
this point, the node executes the original query."
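
To make that concrete, here is a minimal sketch (PyMySQL; the host,
credentials, table, and query are placeholders, not anything from Fuel or
Neutron) of forcing the causality check around a single critical read:

    # Illustrative sketch only: force a causality check before a read that
    # must observe a write committed through another Galera node.
    import pymysql

    conn = pymysql.connect(host='galera-node-2', user='app',
                           password='secret', db='appdb')
    try:
        with conn.cursor() as cur:
            # Block this session until the node has applied all write-sets
            # known cluster-wide at this point, then run the read.
            cur.execute("SET SESSION wsrep_sync_wait = 1")
            cur.execute("SELECT id FROM floating_ips WHERE address = %s",
                        ('198.51.100.7',))
            rows = cur.fetchall()
            # Restore the default once the critical read is done.
            cur.execute("SET SESSION wsrep_sync_wait = 0")
    finally:
        conn.close()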

In the active/passive case where you never use the passive node as a
read slave, one could actually set wsrep_sync_wait=1 globally. This will
cause a ton of lag while new queries happen on the new active and old
transactions are still being applied, but that's exactly what you want,
so that when you fail over, nothing proceeds until all writes from the
original active node are applied and available on the new active node.
It would help if your failover technology actually _breaks_ connections
to a presumed dead node, so writes stop happening on the old one.

Also, If you thrash back and forth a bit, that could cause your app to
virtually freeze, but HAProxy and most other failover technologies allow
tuning timings so that you can stay off of a passive server long enough
to calm it down and fail more gracefully to it.

Anyway, this is why sometimes I do wonder if we'd be better off just
using MySQL with DRBD and good old pacemaker.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposed Revision to Magnum's Mission

2016-04-30 Thread Joshua Harlow

Cool, abandoning mine as it seems the team is working on one anyway.

(I just didn't want a mission statement change to wait until some 
unknown project comes along, someday in the future...)


-Josh

Davanum Srinivas wrote:

Adrian,

fyi, there's one more already filed by Josh -
https://review.openstack.org/#/c/310941/

-- Dims

On Fri, Apr 29, 2016 at 7:47 PM, Adrian Otto  wrote:

Magnum Team,

In accordance with our Fishbowl discussion yesterday at the Newton Design
Summit in Austin, I have proposed the following revision to Magnum’s mission
statement:

https://review.openstack.org/311476

The idea is to narrow the scope of our Magnum project to allow us to focus
on making popular COE software work great with OpenStack, and make it easy
for OpenStack cloud users to quickly set up fleets of cloud capacity managed
by their chosen COE software (such as Swarm, Kubernetes, Mesos, etc.). Cloud
operators and users will value Multi-Tenancy for COE’s, tight integration
with OpenStack, and the ability to source this all as a self-service
resource.

We agreed to deprecate and remove the /containers resource from Magnum’s
API, and will leave the door open for a new OpenStack project with its own
name and mission to satisfy the interests of our community members who want
an OpenStack API service that abstracts one or more COE’s.

Regards,

Adrian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-04-30 Thread Jinay Vora (jvora)
Hi Steven,

Please add me to the bootstrap list too. :)

Thanks.

———
Regards,
Jinay Vora.




On 4/30/16, 10:11 AM, "Davanum Srinivas"  wrote:

>Steven,
>
>Please add me to the bootstrap list for the k8s repo
>
>Thanks,
>Dims
>
>On Sat, Apr 30, 2016 at 9:50 AM, Steven Dake (stdake)  wrote:
>> Fellow core reviewers,
>>
>> We had a fantastic turnout at our fishbowl kubernetes as an underlay for
>> Kolla session.  The etherpad documents the folks interested and discussion
>> at summit[1].
>>
>> This proposal is mostly based upon a combination of several discussions at
>> open design meetings coupled with the kubernetes underlay discussion.
>>
>> The proposal (and what we are voting on) is as follows:
>>
>> Folks in the following list will be added to a kolla-k8s-core group.
>>
>>  This kolla-k8s-core group will be responsible for code reviews and code
>> submissions to the kolla repository for the /kubernetes top level directory.
>> Individuals in kolla-k8s-core that consistently approve (+2) or disapprove
>> (-2) changes to TLD directories other than kubernetes will be handled on a
>> case-by-case basis, with several "training warnings" followed by removal
>> from the kolla-k8s-core group.  The kolla-k8s-core group will be added as a
>> subgroup of the kolla-core reviewer team, which means they in effect have
>> all of the ACL access of the existing kolla repository.  I think it is
>> better in this case to trust these individuals to do the right thing and
>> only approve changes for the kubernetes directory until they are proposed
>> for the kolla-core reviewer group, where they can gate changes to any part
>> of the repository.
>>
>> Britt Houser
>>
>> mark casey
>>
>> Steven Dake (delta-alpha-kilo-echo)
>>
>> Michael Schmidt
>>
>> Marian Schwarz
>>
>> Andrew Battye
>>
>> Kevin Fox (kfox)
>>
>> Sidharth Surana (ssurana)
>>
>>  Michal Rostecki (mrostecki)
>>
>>   Swapnil Kulkarni (coolsvap)
>>
>>   MD NADEEM (mail2nadeem92)
>>
>>   Vikram Hosakote (vhosakot)
>>
>>   Jeff Peeler (jpeeler)
>>
>>   Martin Andre (mandre)
>>
>>   Ian Main (Slower)
>>
>> Hui Kang (huikang)
>>
>> Serguei Bezverkhi (sbezverk)
>>
>> Alex Polvi (polvi)
>>
>> Rob Mason
>>
>> Alicja Kwasniewska
>>
>> sean mooney (sean-k-mooney)
>>
>> Keith Byrne (kbyrne)
>>
>> Zdenek Janda (xdeu)
>>
>> Brandon Jozsa (v1k0d3n)
>>
>> Rajath Agasthya (rajathagasthya)
>>
>>
>> If you already are in the kolla-core review team, you won't be added to the
>> kolla-k8s-core team as you will already have the necessary ACLs to do the
>> job.  If you feel you would like to join this initial bootstrapping process,
>> please add your name to the etherpad in [1].
>>
>> After 8 weeks (July 15th), folks that have not been actively reviewing or
>> committing code will be removed from the kolla-k8s-core group.  We will use
>> the governance repository metrics for team size [2] which is either 30
>> reviews over 6 months (in this case, 10 reviews), or 6 commits over 6 months
>> (in this case 2 commits) to the repository.  Folks that don't meet the
>> qualifications are still welcome to commit to the repository and contribute
>> code or documentation but will lose approval rights on patches.
>>
>> The kubernetes codebase will be maintained in the
>> https://github.com/openstack/kolla repository under the kubernetes top level
>> directory.  Contributors that become active in the kolla repository itself
>> will be proposed over time to the kolla-core group.  Only core-kolla members
>> will be permitted to participate in policy decisions and voting thereof, so
>> there is some minimal extra responsibility involved in joining the
>> kolla-core ACL team for those folks wanting to move into the kolla core team
>> over time.  The goal will be to over time entirely remove the kolla-k8s-core
>> team and make one core reviewer team in the kolla-core ACL.
>>
>> Members in the kolla-k8s-core group will have the ability to +2 or –2 any
>> change to the main kolla repository via ACLs, however, I propose we trust
>> these folks to only +2/-2 changes related to the kubernetes directory in the
>> kolla repository and remove folks that consistently break this agreement.
>> Initial errors as folks learn the system will be tolerated and commits
>> reverted as makes sense.
>>
>> I feel we made a couple of errors with the creation of Kolla-mesos that
>> need correction.  Our first error was that the kolla-mesos-core team lacked
>> a diversely affiliated team membership developing the code base.  The above
>> list has significant diversity.  The second error is that the repository was
>> split in the first place.  This resulted in a separate ABI to the containers
>> being implemented which was a sore spot for me personally.  We did our best
>> to build both sides of the bridge here, but this time I'd like the bridge
>> between these two interests and set of individuals to be fully built before
>> beginning.  As such, I'd ask the existing kolla-core team to trust 

Re: [openstack-dev] [nova] next min libvirt?

2016-04-30 Thread Thomas Bechtold
Hi,

On Fri, Apr 29, 2016 at 10:13:42AM -0500, Sean Dague wrote:
> We've just landed the libvirt min bump to require 1.2.1. It's
> probably a good time to consider the appropriate bump for Ocata.
> 
> By that time our Ubuntu LTS will be 16.04 (libvirt 1.3.1), RHEL 7.1
> (1.2.8). Additionally Debian Jessie has 1.2.9. RHEL 7.2 is at 1.2.17.
>
> My suggestion is we set MIN_LIBVIRT_VERSION to 1.2.8. This will mean
> that NUMA support in libvirt (excepting the blacklists) and huge page
> support is assumed on x86_64.

Works also for SUSE which has 1.2.18 already in SLE 12 SP1.
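
For context, a minimal sketch of the version-gating pattern being discussed
(illustrative only, not Nova's actual driver code):

    # Illustrative sketch of gating on a minimum libvirt version; the
    # constant reflects the proposed 1.2.8 minimum.
    import libvirt

    MIN_LIBVIRT_VERSION = (1, 2, 8)

    def version_to_tuple(version_int):
        # libvirt reports versions as an integer, e.g. 1002008 for 1.2.8
        return (version_int // 1000000,
                (version_int // 1000) % 1000,
                version_int % 1000)

    def has_min_version(conn, minimum=MIN_LIBVIRT_VERSION):
        return version_to_tuple(conn.getLibVersion()) >= minimum

    conn = libvirt.open('qemu:///system')
    if not has_min_version(conn):
        raise RuntimeError('libvirt >= %d.%d.%d is required' %
                           MIN_LIBVIRT_VERSION)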

-- 
Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-04-30 Thread Davanum Srinivas
Steven,

Please add me to the bootstrap list for the k8s repo

Thanks,
Dims

On Sat, Apr 30, 2016 at 9:50 AM, Steven Dake (stdake)  wrote:
> Fellow core reviewers,
>
> We had a fantastic turnout at our fishbowl kubernetes as an underlay for
> Kolla session.  The etherpad documents the folks interested and discussion
> at summit[1].
>
> This proposal is mostly based upon a combination of several discussions at
> open design meetings coupled with the kubernetes underlay discussion.
>
> The proposal (and what we are voting on) is as follows:
>
> Folks in the following list will be added to a kolla-k8s-core group.
>
>  This kolla-k8s-core group will be responsible for code reviews and code
> submissions to the kolla repository for the /kubernetes top level directory.
> Individuals in kolla-k8s-core that consistently approve (+2) or disapprove
> (-2) changes to TLD directories other than kubernetes will be handled on a
> case-by-case basis, with several "training warnings" followed by removal
> from the kolla-k8s-core group.  The kolla-k8s-core group will be added as a
> subgroup of the kolla-core reviewer team, which means they in effect have
> all of the ACL access of the existing kolla repository.  I think it is
> better in this case to trust these individuals to do the right thing and
> only approve changes for the kubernetes directory until they are proposed
> for the kolla-core reviewer group, where they can gate changes to any part
> of the repository.
>
> Britt Houser
>
> mark casey
>
> Steven Dake (delta-alpha-kilo-echo)
>
> Michael Schmidt
>
> Marian Schwarz
>
> Andrew Battye
>
> Kevin Fox (kfox)
>
> Sidharth Surana (ssurana)
>
>  Michal Rostecki (mrostecki)
>
>   Swapnil Kulkarni (coolsvap)
>
>   MD NADEEM (mail2nadeem92)
>
>   Vikram Hosakote (vhosakot)
>
>   Jeff Peeler (jpeeler)
>
>   Martin Andre (mandre)
>
>   Ian Main (Slower)
>
> Hui Kang (huikang)
>
> Serguei Bezverkhi (sbezverk)
>
> Alex Polvi (polvi)
>
> Rob Mason
>
> Alicja Kwasniewska
>
> sean mooney (sean-k-mooney)
>
> Keith Byrne (kbyrne)
>
> Zdenek Janda (xdeu)
>
> Brandon Jozsa (v1k0d3n)
>
> Rajath Agasthya (rajathagasthya)
>
>
> If you already are in the kolla-core review team, you won't be added to the
> kolla-k8s-core team as you will already have the necessary ACLs to do the
> job.  If you feel you would like to join this initial bootstrapping process,
> please add your name to the etherpad in [1].
>
> After 8 weeks (July 15th), folks that have not been actively reviewing or
> committing code will be removed from the kolla-k8s-core group.  We will use
> the governance repository metrics for team size [2] which is either 30
> reviews over 6 months (in this case, 10 reviews), or 6 commits over 6 months
> (in this case 2 commits) to the repository.  Folks that don't meet the
> qualifications are still welcome to commit to the repository and contribute
> code or documentation but will lose approval rights on patches.
>
> The kubernetes codebase will be maintained in the
> https://github.com/openstack/kolla repository under the kubernetes top level
> directory.  Contributors that become active in the kolla repository itself
> will be proposed over time to the kolla-core group.  Only core-kolla members
> will be permitted to participate in policy decisions and voting thereof, so
> there is some minimal extra responsibility involved in joining the
> kolla-core ACL team for those folks wanting to move into the kolla core team
> over time.  The goal will be to over time entirely remove the kolla-k8s-core
> team and make one core reviewer team in the kolla-core ACL.
>
> Members in the kolla-k8s-core group will have the ability to +2 or –2 any
> change to the main kolla repository via ACLs, however, I propose we trust
> these folks to only +2/-2 changes related to the kubernetes directory in the
> kolla repository and remove folks that consistently break this agreement.
> Initial errors as folks learn the system will be tolerated and commits
> reverted as makes sense.
>
> I feel we made a couple of errors with the creation of Kolla-mesos that
> need correction.  Our first error was that the kolla-mesos-core team lacked
> a diversely affiliated team membership developing the code base.  The above
> list has significant diversity.  The second error is that the repository was
> split in the first place.  This resulted in a separate ABI to the containers
> being implemented which was a sore spot for me personally.  We did our best
> to build both sides of the bridge here, but this time I'd like the bridge
> between these two interests and set of individuals to be fully built before
> beginning.  As such, I'd ask the existing kolla-core team to trust my
> judgement on this point and roll with it.  We can always change the
> structure later if this model doesn't work out as I expect it will, but if
> we started with split repos and a changed structure to begin with, we can't
> go back to a non-split repo as the action is irreversible 

Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-04-30 Thread Michał Jastrzębski
Also +1 :)

On 30 April 2016 at 09:58, Michał Jastrzębski  wrote:
> Add me too please Steven.
>
> On 30 April 2016 at 09:50, Steven Dake (stdake)  wrote:
>> Fellow core reviewers,
>>
>> We had a fantastic turnout at our fishbowl kubernetes as an underlay for
>> Kolla session.  The etherpad documents the folks interested and discussion
>> at summit[1].
>>
>> This proposal is mostly based upon a combination of several discussions at
>> open design meetings coupled with the kubernetes underlay discussion.
>>
>> The proposal (and what we are voting on) is as follows:
>>
>> Folks in the following list will be added to a kolla-k8s-core group.
>>
>>  This kolla-k8s-core group will be responsible for code reviews and code
>> submissions to the kolla repository for the /kubernetes top level directory.
>> Individuals in kolla-k8s-core that consistently approve (+2) or disapprove
>> (-2) changes to TLD directories other than kubernetes will be handled on a
>> case-by-case basis, with several "training warnings" followed by removal
>> from the kolla-k8s-core group.  The kolla-k8s-core group will be added as a
>> subgroup of the kolla-core reviewer team, which means they in effect have
>> all of the ACL access of the existing kolla repository.  I think it is
>> better in this case to trust these individuals to do the right thing and
>> only approve changes for the kubernetes directory until they are proposed
>> for the kolla-core reviewer group, where they can gate changes to any part
>> of the repository.
>>
>> Britt Houser
>>
>> mark casey
>>
>> Steven Dake (delta-alpha-kilo-echo)
>>
>> Michael Schmidt
>>
>> Marian Schwarz
>>
>> Andrew Battye
>>
>> Kevin Fox (kfox)
>>
>> Sidharth Surana (ssurana)
>>
>>  Michal Rostecki (mrostecki)
>>
>>   Swapnil Kulkarni (coolsvap)
>>
>>   MD NADEEM (mail2nadeem92)
>>
>>   Vikram Hosakote (vhosakot)
>>
>>   Jeff Peeler (jpeeler)
>>
>>   Martin Andre (mandre)
>>
>>   Ian Main (Slower)
>>
>> Hui Kang (huikang)
>>
>> Serguei Bezverkhi (sbezverk)
>>
>> Alex Polvi (polvi)
>>
>> Rob Mason
>>
>> Alicja Kwasniewska
>>
>> sean mooney (sean-k-mooney)
>>
>> Keith Byrne (kbyrne)
>>
>> Zdenek Janda (xdeu)
>>
>> Brandon Jozsa (v1k0d3n)
>>
>> Rajath Agasthya (rajathagasthya)
>>
>>
>> If you already are in the kolla-core review team, you won't be added to the
>> kolla-k8s-core team as you will already have the necessary ACLs to do the
>> job.  If you feel you would like to join this initial bootstrapping process,
>> please add your name to the etherpad in [1].
>>
>> After 8 weeks (July 15th), folks that have not been actively reviewing or
>> committing code will be removed from the kolla-k8s-core group.  We will use
>> the governance repository metrics for team size [2] which is either 30
>> reviews over 6 months (in this case, 10 reviews), or 6 commits over 6 months
>> (in this case 2 commits) to the repository.  Folks that don't meet the
>> qualifications are still welcome to commit to the repository and contribute
>> code or documentation but will lose approval rights on patches.
>>
>> The kubernetes codebase will be maintained in the
>> https://github.com/openstack/kolla repository under the kubernetes top-level
>> directory.  Contributors that become active in the kolla repository itself
>> will be proposed over time to the kolla-core group.  Only kolla-core members
>> will be permitted to participate in policy decisions and the voting thereof,
>> so there is some minimal extra responsibility involved in joining the
>> kolla-core ACL team for those folks wanting to move into the kolla core team
>> over time.  The goal is to eventually remove the kolla-k8s-core team
>> entirely and have a single core reviewer team in the kolla-core ACL.
>>
>> Members in the kolla-k8s-core group will have the ability to +2 or –2 any
>> change to the main kolla repository via ACLs, however, I propose we trust
>> these folks to only +2/-2 changes related to the kubernetes directory in the
>> kolla repository and remove folks that consistently break this agreement.
>> Initial errors as folks learn the system will be tolerated and commits
>> reverted as makes sense.
>>
>> I feel we made a couple of errors with the creation of kolla-mesos that need
>> correction.  Our first error was that the kolla-mesos-core team lacked a
>> diversely affiliated team membership developing the code base.  The above
>> list has significant diversity.  The second error is that the repository was
>> split in the first place.  This resulted in a separate ABI to the containers
>> being implemented, which was a sore spot for me personally.  We did our best
>> to build both sides of the bridge here, but this time I'd like the bridge
>> between these two interests and sets of individuals to be fully built before
>> beginning.  As such, I'd ask the existing kolla-core team to trust my
>> judgement on this point and roll with it.  We can always change the
>> structure later if this model doesn't work out as I expect it 

Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-04-30 Thread Michał Jastrzębski
Add me too please Steven.

On 30 April 2016 at 09:50, Steven Dake (stdake)  wrote:
> Fellow core reviewers,
>
> We had a fantastic turnout at our fishbowl kubernetes as an underlay for
> Kolla session.  The etherpad documents the folks interested and discussion
> at summit[1].
>
> This proposal is mostly based upon a combination of several discussions at
> open design meetings coupled with the kubernetes underlay discussion.
>
> The proposal (and what we are voting on) is as follows:
>
> Folks in the following list will be added to a kolla-k8s-core group.
>
>  This kolla-k8s-core group will be responsible for code reviews and code
> submissions to the kolla repository for the /kubernetes top level directory.
> Individuals in kolla-k8s-core that consistently approve (+2) or disapprove
> (-2) changes in top-level directories other than kubernetes will be handled
> on a case-by-case basis, with several "training warnings" followed by
> removal from the kolla-k8s-core group.  The kolla-k8s-core group will be
> added as a subgroup of the kolla-core reviewer team, which means they in
> effect have the same ACL access as the existing kolla-core team to the kolla
> repository.  I think it is better in this case to trust these individuals to
> do the right thing and only approve changes for the kubernetes directory
> until they are proposed for the kolla-core reviewer group, where they can
> gate changes to any part of the repository.
>
> Britt Houser
>
> mark casey
>
> Steven Dake (delta-alpha-kilo-echo)
>
> Michael Schmidt
>
> Marian Schwarz
>
> Andrew Battye
>
> Kevin Fox (kfox)
>
> Sidharth Surana (ssurana)
>
>  Michal Rostecki (mrostecki)
>
>   Swapnil Kulkarni (coolsvap)
>
>   MD NADEEM (mail2nadeem92)
>
>   Vikram Hosakote (vhosakot)
>
>   Jeff Peeler (jpeeler)
>
>   Martin Andre (mandre)
>
>   Ian Main (Slower)
>
> Hui Kang (huikang)
>
> Serguei Bezverkhi (sbezverk)
>
> Alex Polvi (polvi)
>
> Rob Mason
>
> Alicja Kwasniewska
>
> sean mooney (sean-k-mooney)
>
> Keith Byrne (kbyrne)
>
> Zdenek Janda (xdeu)
>
> Brandon Jozsa (v1k0d3n)
>
> Rajath Agasthya (rajathagasthya)
>
>
> If you already are in the kolla-core review team, you won't be added to the
> kolla-k8s-core team as you will already have the necessary ACLs to do the
> job.  If you feel you would like to join this initial bootstrapping process,
> please add your name to the etherpad in [1].
>
> After 8 weeks (July 15th), folks that have not been actively reviewing or
> committing code will be removed from the kolla-k8s-core group.  We will use
> the governance repository metrics for team size [2] which is either 30
> reviews over 6 months (in this case, 10 reviews), or 6 commits over 6 months
> (in this case 2 commits) to the repository.  Folks that don't meet the
> qualifications are still welcome to commit to the repository and contribute
> code or documentation but will lose approval rights on patches.
>
> The kubernetes codebase will be maintained in the
> https://github.com/openstack/kolla repository under the kubernetes top-level
> directory.  Contributors that become active in the kolla repository itself
> will be proposed over time to the kolla-core group.  Only kolla-core members
> will be permitted to participate in policy decisions and the voting thereof,
> so there is some minimal extra responsibility involved in joining the
> kolla-core ACL team for those folks wanting to move into the kolla core team
> over time.  The goal is to eventually remove the kolla-k8s-core team
> entirely and have a single core reviewer team in the kolla-core ACL.
>
> Members in the kolla-k8s-core group will have the ability to +2 or –2 any
> change to the main kolla repository via ACLs, however, I propose we trust
> these folks to only +2/-2 changes related to the kubernetes directory in the
> kolla repository and remove folks that consistently break this agreement.
> Initial errors as folks learn the system will be tolerated and commits
> reverted as makes sense.
>
> I feel we made a couple of errors with the creation of kolla-mesos that need
> correction.  Our first error was that the kolla-mesos-core team lacked a
> diversely affiliated team membership developing the code base.  The above
> list has significant diversity.  The second error is that the repository was
> split in the first place.  This resulted in a separate ABI to the containers
> being implemented, which was a sore spot for me personally.  We did our best
> to build both sides of the bridge here, but this time I'd like the bridge
> between these two interests and sets of individuals to be fully built before
> beginning.  As such, I'd ask the existing kolla-core team to trust my
> judgement on this point and roll with it.  We can always change the
> structure later if this model doesn't work out as I expect it will, but if
> we started with split repos and needed to change the structure later, we
> couldn't go back to a non-split repo, as the action is irreversible
> according to dims.
>
> I know this proposal may seem 

[openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-04-30 Thread Steven Dake (stdake)
Fellow core reviewers,

We had a fantastic turnout at our fishbowl session on kubernetes as an underlay
for Kolla.  The etherpad documents the folks interested and the discussion at
summit [1].

This proposal is mostly based upon a combination of several discussions at open 
design meetings coupled with the kubernetes underlay discussion.

The proposal (and what we are voting on) is as follows:

Folks in the following list will be added to a kolla-k8s-core group.

This kolla-k8s-core group will be responsible for code reviews and code
submissions to the kolla repository for the /kubernetes top-level directory.
Individuals in kolla-k8s-core that consistently approve (+2) or disapprove (-2)
changes in top-level directories other than kubernetes will be handled on a
case-by-case basis, with several "training warnings" followed by removal from
the kolla-k8s-core group.  The kolla-k8s-core group will be added as a subgroup
of the kolla-core reviewer team, which means they in effect have the same ACL
access as the existing kolla-core team to the kolla repository.  I think it is
better in this case to trust these individuals to do the right thing and only
approve changes for the kubernetes directory until they are proposed for the
kolla-core reviewer group, where they can gate changes to any part of the
repository.


  *   Britt Houser

  *   mark casey

  *   Steven Dake (delta-alpha-kilo-echo)

  *   Michael Schmidt

  *   Marian Schwarz

  *   Andrew Battye

  *   Kevin Fox (kfox)

  *   Sidharth Surana (ssurana)

  *   Michal Rostecki (mrostecki)

  *   Swapnil Kulkarni (coolsvap)

  *   MD NADEEM (mail2nadeem92)

  *   Vikram Hosakote (vhosakot)

  *   Jeff Peeler (jpeeler)

  *   Martin Andre (mandre)

  *   Ian Main (Slower)

  *   Hui Kang (huikang)

  *   Serguei Bezverkhi (sbezverk)

  *   Alex Polvi (polvi)

  *   Rob Mason

  *   Alicja Kwasniewska

  *   sean mooney (sean-k-mooney)

  *   Keith Byrne (kbyrne)

  *   Zdenek Janda (xdeu)

  *   Brandon Jozsa (v1k0d3n)

  *   Rajath Agasthya (rajathagasthya)

If you already are in the kolla-core review team, you won't be added to the 
kolla-k8s-core team as you will already have the necessary ACLs to do the job.  
If you feel you would like to join this initial bootstrapping process, please 
add your name to the etherpad in [1].

After 8 weeks (July 15th), folks that have not been actively reviewing or 
committing code will be removed from the kolla-k8s-core group.  We will use the 
governance repository metrics for team size [2] which is either 30 reviews over 
6 months (in this case, 10 reviews), or 6 commits over 6 months (in this case 2 
commits) to the repository.  Folks that don't meet the qualifications are still 
welcome to commit to the repository and contribute code or documentation but 
will lose approval rights on patches.

The kubernetes codebase will be maintained in the
https://github.com/openstack/kolla repository under the kubernetes top-level
directory.  Contributors that become active in the kolla repository itself will
be proposed over time to the kolla-core group.  Only kolla-core members will be
permitted to participate in policy decisions and the voting thereof, so there
is some minimal extra responsibility involved in joining the kolla-core ACL
team for those folks wanting to move into the kolla core team over time.  The
goal is to eventually remove the kolla-k8s-core team entirely and have a single
core reviewer team in the kolla-core ACL.

Members in the kolla-k8s-core group will have the ability to +2 or -2 any
change to the main kolla repository via ACLs; however, I propose we trust these
folks to only +2/-2 changes related to the kubernetes directory in the kolla
repository, and that we remove folks that consistently break this agreement.
Initial errors as folks learn the system will be tolerated and commits reverted
as makes sense.
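
For reference, the Gerrit side of this might look roughly like the sketch below
(a hypothetical project.config/ACL excerpt; the real kolla ACL file lives in the
infra project-config repository and its contents may differ):

    # Hypothetical excerpt from a Gerrit ACL file for openstack/kolla.
    # Because kolla-k8s-core is included as a subgroup of kolla-core, a single
    # grant to kolla-core covers both teams for the whole repository; the
    # "kubernetes directory only" restriction is a social agreement, not an ACL.
    [access "refs/heads/*"]
        label-Code-Review = -2..+2 group kolla-core
        label-Workflow = -1..+1 group kolla-core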

I feel we made a couple of errors with the creation of kolla-mesos that need
correction.  Our first error was that the kolla-mesos-core team lacked a
diversely affiliated team membership developing the code base.  The above list
has significant diversity.  The second error is that the repository was split
in the first place.  This resulted in a separate ABI to the containers being
implemented, which was a sore spot for me personally.  We did our best to build
both sides of the bridge here, but this time I'd like the bridge between these
two interests and sets of individuals to be fully built before beginning.  As
such, I'd ask the existing kolla-core team to trust my judgement on this point
and roll with it.  We can always change the structure later if this model
doesn't work out as I expect it will, but if we started with split repos and
needed to change the structure later, we couldn't go back to a non-split repo,
as the action is irreversible according to dims.

I know this proposal may seem uncomfortable for our existing kolla-core team.  
I can assure you based 

Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-04-30 Thread Mike Bayer



On 04/30/2016 02:57 AM, bdobre...@mirantis.com wrote:

Hi Roman.
That's interesting, although it's hard to believe (there is no slave lag in
Galera multi-master). I can only suggest that we create another Jepsen test
to verify exactly the scenario you describe, as well as other
OpenStack-specific patterns.



There is definitely slave lag in Galera, and it can be controlled using 
the wsrep_causal_reads flag.


A demonstration script, whose results I have confirmed separately using 
Python scripts, is at:

https://www.percona.com/blog/2013/03/03/investigating-replication-latency-in-percona-xtradb-cluster/
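
For anyone who wants to reproduce this quickly, here is a minimal sketch of the
same experiment in Python (assuming PyMySQL and two reachable Galera nodes; the
host names and credentials below are placeholders): write a row on one node,
immediately read it back on the other, and compare the behaviour with
wsrep_sync_wait (the successor to wsrep_causal_reads) off and on:

    import pymysql

    # Placeholder connection parameters -- adjust to your own cluster.
    NODE_A = dict(host="galera-node-1", user="test", password="secret", database="test")
    NODE_B = dict(host="galera-node-2", user="test", password="secret", database="test")

    def count_stale_reads(sync_wait):
        a = pymysql.connect(autocommit=True, **NODE_A)
        b = pymysql.connect(autocommit=True, **NODE_B)
        with a.cursor() as cur:
            cur.execute("CREATE TABLE IF NOT EXISTS lag_test (id INT PRIMARY KEY)")
            cur.execute("TRUNCATE lag_test")
        with b.cursor() as cur:
            # 0 = default (no causality check), 1 = wait for causal reads on SELECT
            cur.execute("SET SESSION wsrep_sync_wait = %s", (sync_wait,))
        stale = 0
        for i in range(1000):
            with a.cursor() as cur:                      # write on node A
                cur.execute("INSERT INTO lag_test (id) VALUES (%s)", (i,))
            with b.cursor() as cur:                      # read immediately on node B
                cur.execute("SELECT 1 FROM lag_test WHERE id = %s", (i,))
                if cur.fetchone() is None:
                    stale += 1                           # write not applied yet
        a.close()
        b.close()
        return stale

    if __name__ == "__main__":
        print("wsrep_sync_wait=0:", count_stale_reads(0), "stale reads out of 1000")
        print("wsrep_sync_wait=1:", count_stale_reads(1), "stale reads out of 1000")

With wsrep_sync_wait=1 the stale-read count should drop to zero, at the cost of
extra read latency.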




Regards,
Bogdan.

*From:* Roman Podoliaka
*Sent:* Friday, 29 April 2016 21:04
*To:* OpenStack Development Mailing List (not for usage questions)
*Cc:* openstack-operat...@lists.openstack.org


Hi Bogdan,

Thank you for sharing this! I'll need to familiarize myself with this
Jepsen thing, but overall it looks interesting.

As it turns out, we already run Galera in multi-writer mode in Fuel
unintentionally: when the active MySQL node goes down, HAProxy starts
opening connections to a backup; then the active comes back up and
HAProxy starts opening connections to the original MySQL node, but
OpenStack services may still have connections opened to the backup in
their connection pools - so now you may have connections to multiple
MySQL nodes at the same time, which is exactly what you wanted to
avoid by using active/backup in the HAProxy configuration.

^ this actually leads to an interesting issue [1], when the DB state
committed on one node is not immediately available on another one.
Replication lag can be controlled via session variables [2], but that
does not always help: e.g. in [1] Nova first goes to Neutron to create
a new floating IP, gets 201 (and Neutron actually *commits* the DB
transaction) and then makes another REST API request to get a list of
floating IPs by address - the latter can be served by another
neutron-server, connected to another Galera node, which does not have
the latest state applied yet due to 'slave lag' - it can happen that
the list will be empty. Unfortunately, 'wsrep_sync_wait' can't help
here, as it's two different REST API requests, potentially served by
two different neutron-server instances.

Basically, you'd need to *always* wait for the latest state to be
applied before executing any queries, which Galera is trying to avoid
for performance reasons.
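
If a deployment really did want that behaviour, one way to approximate it from
the service side is to force the setting on every new connection, e.g. with a
SQLAlchemy connect hook (a sketch only; the DSN is a placeholder and oslo.db
would normally manage the engine):

    from sqlalchemy import create_engine, event

    # Placeholder DSN -- in a real service this comes from oslo.db configuration.
    engine = create_engine("mysql+pymysql://user:secret@galera-vip/neutron")

    @event.listens_for(engine, "connect")
    def _enforce_causal_reads(dbapi_connection, connection_record):
        # Make every new connection wait for the cluster-wide commit state
        # before serving SELECTs (read-after-write, at the cost of latency).
        cursor = dbapi_connection.cursor()
        cursor.execute("SET SESSION wsrep_sync_wait = 1")
        cursor.close()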

Thanks,
Roman

[1] https://bugs.launchpad.net/fuel/+bug/1529937
[2]
http://galeracluster.com/2015/06/achieving-read-after-write-semantics-with-galera/
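
For context, the active/backup arrangement described above typically looks
something like the following haproxy.cfg fragment (a sketch with made-up names
and addresses): only the first server receives traffic while its health check
passes, and the others are marked backup.

    listen mysql
        bind 192.168.0.10:3306
        mode tcp
        option tcpka
        server node1 192.168.0.11:3306 check inter 2000 rise 2 fall 3
        server node2 192.168.0.12:3306 check inter 2000 rise 2 fall 3 backup
        server node3 192.168.0.13:3306 check inter 2000 rise 2 fall 3 backup

When node1 fails and later recovers, new connections move back to it, but
services can still hold pooled connections to node2 - the unintended
multi-writer window described above.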

On Fri, Apr 22, 2016 at 10:42 AM, Bogdan Dobrelya
 wrote:
 > [crossposting to openstack-operat...@lists.openstack.org]
 >
 > Hello.
 > I wrote this paper [0] to demonstrate an approach to how we can leverage the
 > Jepsen framework in a QA/CI/CD pipeline for OpenStack projects like Oslo
 > (DB) or Trove, Tooz DLM, and perhaps for any integration projects which
 > rely on distributed systems. Although all tests are yet to be finished,
 > the results are quite visible, so I'd better share early for review,
 > discussion and comments.
 >
 > I have similar tests done for the RabbitMQ OCF RA clusterers as well,
 > although I have yet to write a report.
 >
 > PS. I'm sorry for the many tags I placed in the topic header; should I have
 > used just "all" :) ? Have a nice weekend and take care!
 >
 > [0] https://goo.gl/VHyIIE
 >
 > --
 > Best regards,
 > Bogdan Dobrelya,
 > Irc #bogdando
 >
 >
 >
 >
__
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-04-30 Thread bdobrelia
Hi Roman.

That's interesting, although it's hard to believe (there is no slave lag in
Galera multi-master). I can only suggest that we create another Jepsen test to
verify exactly the scenario you describe, as well as other OpenStack-specific
patterns.

Regards,
Bogdan.

From: Roman Podoliaka
Sent: Friday, 29 April 2016 21:04
To: OpenStack Development Mailing List (not for usage questions)
Cc: openstack-operat...@lists.openstack.org

Hi Bogdan,

Thank you for sharing this! I'll need to familiarize myself with this
Jepsen thing, but overall it looks interesting.

As it turns out, we already run Galera in multi-writer mode in Fuel
unintentionally: when the active MySQL node goes down, HAProxy starts
opening connections to a backup; then the active comes back up and
HAProxy starts opening connections to the original MySQL node, but
OpenStack services may still have connections opened to the backup in
their connection pools - so now you may have connections to multiple
MySQL nodes at the same time, which is exactly what you wanted to
avoid by using active/backup in the HAProxy configuration.

^ this actually leads to an interesting issue [1], when the DB state
committed on one node is not immediately available on another one.
Replication lag can be controlled via session variables [2], but that
does not always help: e.g. in [1] Nova first goes to Neutron to create
a new floating IP, gets 201 (and Neutron actually *commits* the DB
transaction) and then makes another REST API request to get a list of
floating IPs by address - the latter can be served by another
neutron-server, connected to another Galera node, which does not have
the latest state applied yet due to 'slave lag' - it can happen that
the list will be empty. Unfortunately, 'wsrep_sync_wait' can't help
here, as it's two different REST API requests, potentially served by
two different neutron-server instances.

Basically, you'd need to *always* wait for the latest state to be
applied before executing any queries, which Galera is trying to avoid
for performance reasons.

Thanks,
Roman

[1] https://bugs.launchpad.net/fuel/+bug/1529937
[2] 
http://galeracluster.com/2015/06/achieving-read-after-write-semantics-with-galera/

On Fri, Apr 22, 2016 at 10:42 AM, Bogdan Dobrelya
 wrote:
> [crossposting to openstack-operat...@lists.openstack.org]
>
> Hello.
> I wrote this paper [0] to demonstrate an approach to how we can leverage the
> Jepsen framework in a QA/CI/CD pipeline for OpenStack projects like Oslo
> (DB) or Trove, Tooz DLM, and perhaps for any integration projects which
> rely on distributed systems. Although all tests are yet to be finished,
> the results are quite visible, so I'd better share early for review,
> discussion and comments.
>
> I have similar tests done for the RabbitMQ OCF RA clusterers as well,
> although I have yet to write a report.
>
> PS. I'm sorry for the many tags I placed in the topic header; should I have
> used just "all" :) ? Have a nice weekend and take care!
>
> [0] https://goo.gl/VHyIIE
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev