On 02/05/2016 01:17 PM, Doug Hellmann wrote:
Excerpts from Ryan Brown's message of 2016-02-05 12:14:34 -0500:
On 02/05/2016 05:57 AM, Thierry Carrez wrote:
Hi everyone,

Even before OpenStack had a name, our "Four Opens" principles were
created to define how we would operate as a community. The first open,
"Open Source", added the following precision: "We do not produce 'open
core' software". What does this mean in 2016 ?

Back in 2010 when OpenStack was started, this was a key difference from
the other open source cloud platform (Eucalyptus), which followed an
Open Core strategy with a crippled community edition and an "enterprise
version". OpenStack was then the property of a single entity
(Rackspace), so giving strong signals that we would never follow such a
strategy was essential to form a real community.

Fast-forward to today: the open source project is driven by an
independent non-profit Foundation, which could not even do an "enterprise edition"
if it wanted to. However, member companies build "enterprise products"
on top of the Apache-licensed upstream project. And we have drivers that
expose functionality in proprietary components. So what does it mean to
"not do open core" in 2016 ? What is acceptable and what's not ? It is
time for us to refresh this.

My personal take on that is that we can draw a line in the sand for what
is acceptable as an official project in the upstream OpenStack open
source effort. It should have a fully-functional, production-grade open
source implementation. If you need proprietary software or a commercial
entity to fully use the functionality of a project, or to get serious
about it, then it should not be accepted in OpenStack as an official
project. It can still live as a non-official project and even be hosted
under OpenStack infrastructure, but it should not be part of
"OpenStack". That is how I would interpret "no open core" in OpenStack
2016.

Of course, the devil is in the details, especially around what I mean by
"fully-functional" and "production-grade". Is it just an API/stability
thing, or does performance/scalability come into account? There will
always be some subjectivity there, but I think it's a good place to start.

Comments?

If a project isn't fully functional*, then why would we accept it at all?
Imagine this scenario:

1) Heat didn't exist
2) A project exactly like Heat, one that lets you use templates to
create resources to a specification, applies to join OpenStack
3) BUT, if you don't buy Proprietary Enterprise Template Parsing
Platform 9, a product of Shed Cat Enterprise Leopards**, you can't parse
templates longer than 200 characters (see the sketch after this list).
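
To make that hypothetical concrete, here is a minimal sketch (every name
invented, no relation to any real project) of what that kind of
artificial crippling looks like in code: the open source path works, but
only up to an arbitrary limit that a proprietary add-on removes.

    # Hypothetical "open core" crippled parser; not code from any real
    # project. The 200-character cap exists only to push users toward a
    # proprietary add-on.

    FREE_TEMPLATE_LIMIT = 200  # arbitrary cap in the "community edition"

    def _load_enterprise_parser():
        """Return the proprietary parser if installed, else None."""
        try:
            import shedcat_etpp9  # hypothetical closed-source package
            return shedcat_etpp9.Parser()
        except ImportError:
            return None

    def parse_template(template):
        enterprise = _load_enterprise_parser()
        if enterprise is not None:
            return enterprise.parse(template)
        if len(template) > FREE_TEMPLATE_LIMIT:
            raise RuntimeError(
                "Templates over %d characters require Proprietary "
                "Enterprise Template Parsing Platform 9."
                % FREE_TEMPLATE_LIMIT)
        # Trivial stand-in for real parsing logic.
        return {"template": template}

A repository structured that way is fully open, but the open code alone
is not production-grade; that is exactly the pattern a "no open core"
rule exists to keep out.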

There's a more concrete case being considered right now that is less
clear to some [1].

The Poppy project provides an open source service to wrap content
delivery network APIs. They follow all of our other best-practices,
but there is apparently no practical open source CDN solution.
OpenCDN was mentioned, but it seems dead.

This doesn't surprise me, since most of the "value-add" from a CDN is "we have points of presence and peering agreements" and less "we use software XYZ". Take, for instance, a typical CDN pricing structure: you pay tiered rates based on some combination of bandwidth used and the number of points of presence you want your content served from.
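
As a rough illustration of that pricing shape (every number below is
made up for the example), the bill is a function of bandwidth tiers plus
a per-PoP charge; the software never enters into it:

    # Illustrative CDN bill: tiered bandwidth rates plus a per-PoP fee.
    # All prices are invented for the example.

    # (tier ceiling in GB, price per GB); None means no ceiling
    BANDWIDTH_TIERS = [(10000, 0.085), (50000, 0.060), (None, 0.040)]
    PRICE_PER_POP = 25.00  # flat monthly fee per point of presence

    def monthly_cost(gb_served, pops):
        cost = pops * PRICE_PER_POP
        floor = 0
        for ceiling, rate in BANDWIDTH_TIERS:
            top = gb_served if ceiling is None else min(gb_served, ceiling)
            span = top - floor
            if span <= 0:
                break
            cost += span * rate
            floor = ceiling
        return cost

    # 10k GB @ $0.085 + 15k GB @ $0.060 + 12 PoPs @ $25 = $2050.00
    print(monthly_cost(25000, 12))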

Open source projects aren't usually set up around making big capital expenditures and buying lots of bandwidth, so the lack of a FOSS CDN isn't exactly Poppy's fault.

In the absence of any open source solution, the Poppy service is
only useful when connected to commercial services. The Poppy team
has provided drivers for several of these (I see akamai, cloudfront,
fastly, and maxcdn in their "providers" package). Stackalytics shows
the contributors on the team are mostly from Rackspace[2].  I'm not
aware of Rackspace owning any of those services, though I'm sure
they have relationships with one or more of them.
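
For flavor, that multi-provider design is essentially a driver pattern:
one abstract interface, one concrete driver per commercial CDN. This is
a simplified sketch of the shape only; the class and method names are
illustrative, not Poppy's actual API.

    # Simplified multi-provider driver pattern, in the spirit of a
    # "providers" package. Names are illustrative, not Poppy's real API.
    import abc

    class CDNProviderBase(abc.ABC):
        """One driver per commercial CDN backend."""

        @abc.abstractmethod
        def create_service(self, name, origin):
            """Provision a CDN service fronting the given origin."""

        @abc.abstractmethod
        def delete_service(self, name):
            """Tear the service down."""

    class FastlyDriver(CDNProviderBase):
        def create_service(self, name, origin):
            # A real driver would call the vendor's REST API here.
            return {"provider": "fastly", "name": name, "origin": origin}

        def delete_service(self, name):
            pass

    def load_driver(provider):
        # Real OpenStack code would typically discover drivers via
        # setuptools entry points (e.g. with stevedore).
        drivers = {"fastly": FastlyDriver}
        return drivers[provider]()

    print(load_driver("fastly").create_service("site", "origin.example.com"))

Every driver is open source; only the service on the other side of the
API call is commercial.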

Even in the absence of an open source CDN, I struggle to call Poppy "open core" in the spirit of "you must buy our enterprise plan", because it still doesn't require paying any single vendor. I can still choose the provider of the commodity (bandwidth, datacenter locations, etc.).

This actually proves my point about the community/TC being intelligent enough to make decisions about this sort of thing. A written rule about open core might mean we couldn't allow Poppy into the tent, but because we have the broader Four Opens we can look at it and say "yeah, it requires you to pay for a service, but it's clearly enabling many vendors and making it EASIER to switch, which is great."

My understanding of the "no open core" requirement is about the
intent of the contributor.  We don't want separate community and
"enterprise" editions of components (services or drivers).  The
Poppy situation doesn't seem to be a case of open washing anything,
or holding back features in order to sell a more advanced version.
It happens that for Poppy to be useful, you have to buy another
service for it to talk to (and to serve your data), but all of the
Poppy code is actually open and there are several services to choose
from.  There is no "better" version of Poppy available for sale; there
is no "PoppyCDN subscription" that would unlock one.

So, is Poppy "open core"?

Doug

[1] https://review.openstack.org/#/c/273756/
[2] http://stackalytics.com/?project_type=all&release=all&module=poppy&metric=commits

I'd say no: Poppy is an open source project/product that makes it easier to use different vendors of commodity services, and that's not a reason to boot it.

Sidebar: I recall that a few years ago Rackspace's CDN offering was based on Akamai, so read into that whatever you want, I guess. They at least used to have a relationship with Akamai at some point.

--
Ryan Brown / Senior Software Engineer, OpenStack / Red Hat, Inc.
