To your other message: I already see a number of PMC members here. Who's
the other entity? The PMC is, sure, the body that formally declares a
release, but this discussion is properly a community one. And here we are,
which is lovely to see.

(May I remind everyone to casually, sometime, browse the large list of
other JIRAs targeted for Spark 3? It's much more than DSv2!)

I can't speak to specific decisions here, but I see:

Spark 3 doesn't have a release date. Notionally it's 6 months after Spark
2.4 (Nov 2018). It'd be reasonable to plan for a little more time. Can we
throw out... June 2019, and I'll update the website? It can slip, but that
gives a concrete timeframe around which to plan. What can comfortably get
in by June 2019?

Agreement that "DSv2" is going into Spark 3, for some definition of DSv2
that's probably roughly Matt's list.

Changes that can't go into a minor release (API changes, etc.) must by
definition go into Spark 3.0. Agree those first and do those now. Delay
Spark 3 until they're done and prioritize accordingly. Changes that can
go into a minor release can go into 3.1, if needed.

This has been in discussion long enough that I think whatever design(s) are
on the table for DSv2 now are as close as one is going to get. The perfect
is the enemy of the good.

Aside from throwing out a date, I probably just restated what everyone
said. But I was 'summoned' :)

On Fri, Feb 22, 2019 at 12:40 PM Mark Hamstra <m...@clearstorydata.com>
wrote:

> However, as other people mentioned, Spark 3.0 has many other major
>> features as well
>>
>
> I fundamentally disagree. First, Spark 3.0 has nothing until the PMC says
> it has something, and we have made no commitment along the lines that
> "Spark 3.0.0 will not be released unless it contains new features x, y and
> z." Second, major-version releases are not about adding new features.
> Major-version releases are about making changes to the public API that we
> cannot make in feature or bug-fix releases. If that is all that is
> accomplished in a particular major release, that's fine -- in fact, we
> quite intentionally did not target new features in the Spark 2.0.0 release.
> The fact that some entity other than the PMC thinks that Spark 3.0 should
> contain certain new features or that it will be costly to them if 3.0 does
> not contain those features is not dispositive. If there are public API
> changes that should occur in a timely fashion and there is also a list of
> new features that some users or contributors want to see in 3.0 but that
> look likely to not be ready in a timely fashion, then the PMC should fully
> consider releasing 3.0 without all those new features. There is no reason
> that they can't come in with 3.1.0.
>
