On 06/22/2017 11:59 AM, Fox, Kevin M wrote:
My $0.02.

That view of dependencies is, IMO, why Kubernetes development is outpacing
OpenStack's and why some users are leaving. I'm not trying to be mean here,
just trying to shine some light on this issue.

Kubernetes at its core has essentially something kind of equivalent to keystone
(k8s RBAC), nova (container mgmt), cinder (PV/PVC/storage classes), heat with
convergence (deployments/daemonsets/etc.), barbican (secrets), designate
(kube-dns), and octavia (kube-proxy/svc/ingress) in one unit. Ops don't have to
work hard to get all of it, users can assume it's all there, and devs don't
have many silos to cross to implement features that touch multiple pieces.
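
To make the "one unit" point concrete: one credential and one API server cover
all of those. A rough sketch with the kubernetes Python client (method names
are from the current client; the mapping comments are mine, so treat it as
illustrative, not as anything official):

    from kubernetes import client, config

    config.load_kube_config()  # one RBAC-scoped credential (the "keystone" bit)
    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    ns = "default"
    apps.list_namespaced_deployment(ns)               # ~ nova/heat territory
    core.list_namespaced_persistent_volume_claim(ns)  # ~ cinder territory
    core.list_namespaced_secret(ns)                   # ~ barbican territory
    core.list_namespaced_service(ns)                  # ~ designate/octavia territory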

This core functionality being combined has allowed them to land features that
are really important to users but that have proven difficult for OpenStack
because of the silos. OpenStack's general pattern has been: stand up a new
service for a new feature; then no one wants to depend on it, so it's ignored
and each silo reimplements a lesser version of it itself.

The OpenStack commons then continues to suffer.

We need to stop this destructive cycle.

OpenStack needs to figure out how to increase its commons. Both internally and 
externally. etcd as a common service was a step in the right direction.
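
For coordination-style needs that already works today: point tooz at the etcd
endpoint and every project gets the same distributed locking instead of rolling
its own. A rough sketch (endpoint and member id are made up; assumes tooz's
etcd3 driver is installed):

    from tooz import coordination

    coord = coordination.get_coordinator("etcd3://127.0.0.1:2379", b"worker-1")
    coord.start()
    with coord.get_lock(b"resize-instance-42"):
        pass  # critical section: only one member cluster-wide runs this
    coord.stop()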

+1 to this, and it's a similar theme to my dismay a few weeks ago when I realized projects are looking to ditch oslo rather than improve it. Since then I've had to chase down a totally avoidable problem in Zaqar that has been confusing dozens of people: Zaqar implemented its database layer direct-to-SQLAlchemy rather than using oslo.db (https://bugs.launchpad.net/tripleo/+bug/1691951), and so missed out on some basic stability features that oslo.db turns on.
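
The difference is small in code and big in operation; roughly (module and
behavior notes from memory, so treat this as a sketch rather than a reference):

    import sqlalchemy
    from oslo_db.sqlalchemy import engines

    # Direct-to-SQLAlchemy, the Zaqar pattern: no disconnect handling, so a
    # MySQL server timing out an idle connection surfaces as a user-facing error.
    raw_engine = sqlalchemy.create_engine("mysql+pymysql://user:pw@db/zaqar")

    # Via oslo.db: same URL, but connection liveness checks, connection
    # recycling and deployer-tunable pooling come wired in by default.
    safe_engine = engines.create_engine("mysql+pymysql://user:pw@db/zaqar")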

There is a balance to be struck between monolithic and expansive for sure, but I think the monolith-phobia may be hurting the quality of the product. It is possible to have clean modularity and separation of concerns in a codebase while still having tighter dependencies; it just takes more discipline to monitor the boundaries.

I think k8s needs to be another common service all the others can rely on. That
could greatly simplify the rest of the OpenStack projects, since a lot of that
functionality would no longer have to be implemented in each project.
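
For instance, a Trove-like "give me a MariaDB" call could reduce to creating a
handful of stock Kubernetes objects. A rough sketch with the kubernetes Python
client (the helper name, image tag, and Secret layout are all invented for
illustration; it assumes a password Secret already exists):

    from kubernetes import client, config

    def make_db(name, password_secret, ns="default"):
        """Translate "give me a MariaDB" into a stock Deployment."""
        config.load_kube_config()
        apps = client.AppsV1Api()
        pod = client.V1PodSpec(containers=[client.V1Container(
            name=name,
            image="mariadb:10.1",
            env=[client.V1EnvVar(
                name="MYSQL_ROOT_PASSWORD",
                value_from=client.V1EnvVarSource(
                    secret_key_ref=client.V1SecretKeySelector(
                        name=password_secret, key="password")))])])
        apps.create_namespaced_deployment(ns, client.V1Deployment(
            metadata=client.V1ObjectMeta(name=name, labels={"app": name}),
            spec=client.V1DeploymentSpec(
                replicas=1,
                selector=client.V1LabelSelector(match_labels={"app": name}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": name}),
                    spec=pod))))

Everything in there is plain Kubernetes API surface; only the thin translation
layer would be Trove's to maintain.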

We also need a way to break down the silo walls and allow more cross project 
collaboration for features. I fear the new push for letting projects run 
standalone will make this worse, not better, further fracturing OpenStack.

Thanks,
Kevin
________________________________________
From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, June 22, 2017 12:58 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Fox, Kevin M wrote:
[...]
If you build a Tessmaster clone just to do mariadb, then you share nothing with
the other communities and have to reinvent the wheel, yet again. Operators'
load increases because the tool doesn't function like other tools.

If you rely on a container orchestration engine that's already cross-cloud and
can be easily deployed by a user or cloud operator, and fill in the gaps with
what Trove wants to support (easy management of DBs), you get to reuse a lot of
the commons, and the user's slight extra investment in dealing with the bit of
extra plumbing lets other things be easily added to their cluster too. It's
very rare that a user would need to deploy/manage only a database. The net load
on the operator decreases, not increases.

I think the user-side tool could totally deploy on Kubernetes clusters
-- if that were the only possible target it would make it a Kubernetes
tool more than an open infrastructure tool, but that's definitely a
possibility. I'm not sure work is needed there, though; there are
already tools (or charts) doing that?

For a server-side approach where you want to provide a DB-provisioning
API, I fear that making the functionality depend on K8s would mean
TroveV2/Hoard would not only depend on Heat and Nova, but also on
something that deploys a Kubernetes cluster (Magnum?), which would
likely hurt its adoption (and reusability in simpler setups). Since
databases work perfectly well in VMs, it feels like a gratuitous
dependency addition?

We generally need to be very careful about creating dependencies between
OpenStack projects. On one side there are base services (like Keystone)
that we have said are alright to depend on, but depending on anything
else is likely to reduce adoption. Magnum's adoption suffers from its
dependency on Heat. If Heat starts depending on Zaqar, we make the
problem worse. I understand it's a hard trade-off: you want to reuse
functionality rather than reinvent it in every project... we just need
to recognize the cost of doing that.

--
Thierry Carrez (ttx)

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev