Mark, if this goal is adopted, "we" is the Apache Spark community.

On Thu, Feb 28, 2019 at 9:52 AM Mark Hamstra <m...@clearstorydata.com>
wrote:

> Who is "we" in these statements, such as "we should consider a functional
> DSv2 implementation a blocker for Spark 3.0"? If it means those
> contributing to the DSv2 effort want to set their own goals, milestones,
> etc., then that is fine with me. If you mean that the Apache Spark project
> should officially commit to the lack of a functional DSv2 implementation
> being a blocker for the release of Spark 3.0, then I'm -1. A major release
> is just not about adding new features. Rather, it is about making changes
> to the existing public API. As such, I'm opposed to any new feature or any
> API addition being considered a blocker of the 3.0.0 release.
>
>
> On Thu, Feb 28, 2019 at 9:09 AM Matt Cheah <mch...@palantir.com> wrote:
>
>> +1 (non-binding)
>>
>>
>>
>> Are identifiers and namespaces going to be rolled under one of those six
>> points?
>>
>>
>>
>> *From: *Ryan Blue <rb...@netflix.com.INVALID>
>> *Reply-To: *"rb...@netflix.com" <rb...@netflix.com>
>> *Date: *Thursday, February 28, 2019 at 8:39 AM
>> *To: *Spark Dev List <dev@spark.apache.org>
>> *Subject: *[VOTE] Functional DataSourceV2 in Spark 3.0
>>
>>
>>
>> I’d like to call a vote for committing to getting DataSourceV2 in a
>> functional state for Spark 3.0.
>>
>> For more context, please see the discussion thread, but here is a quick
>> summary of what this commitment means:
>>
>> - We think that a “functional DSv2” is an achievable goal for the Spark
>>   3.0 release
>> - We will consider this a blocker for Spark 3.0, and take reasonable
>>   steps to make it happen
>> - We will *not* delay the release without a community discussion
>>
>> Here’s what we’ve defined as a functional DSv2:
>>
>> - Add a plugin system for catalogs
>> - Add an interface for table catalogs (see the ongoing SPIP vote)
>> - Add an implementation of the new interface that calls SessionCatalog
>>   to load v2 tables
>> - Add a resolution rule to load v2 tables from the v2 catalog
>> - Add CTAS logical and physical plan nodes
>> - Add conversions from SQL parsed plans to v2 logical plans (e.g.,
>>   INSERT INTO support)
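>> To make the first two items concrete, here is a rough, hypothetical
>> sketch of what a pluggable table-catalog interface could look like.
>> The names, signatures, and the in-memory implementation below are
>> purely illustrative, simplified for the sake of example; they are not
>> the actual interface proposed in the SPIP.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a "table catalog" plugin interface. Names and
// signatures are illustrative only, not the actual SPIP proposal.
interface TableCatalog {
    String name();                     // catalog identifier, e.g. "session"
    boolean tableExists(String ident); // namespace handling omitted for brevity
    String loadTable(String ident);    // table descriptor, simplified to String
}

// A toy in-memory implementation, standing in for the real
// SessionCatalog-backed one that would load v2 tables.
class InMemoryCatalog implements TableCatalog {
    private final Map<String, String> tables = new HashMap<>();

    public String name() { return "in-memory"; }

    public boolean tableExists(String ident) {
        return tables.containsKey(ident);
    }

    public String loadTable(String ident) {
        if (!tables.containsKey(ident)) {
            throw new IllegalArgumentException("Table not found: " + ident);
        }
        return tables.get(ident);
    }

    public void createTable(String ident, String descriptor) {
        tables.put(ident, descriptor);
    }
}

public class CatalogSketch {
    public static void main(String[] args) {
        InMemoryCatalog catalog = new InMemoryCatalog();
        catalog.createTable("db.events", "schema: id INT, ts TIMESTAMP");
        System.out.println(catalog.name());
        System.out.println(catalog.tableExists("db.events"));
        System.out.println(catalog.loadTable("db.events"));
    }
}
```

>> The point of the plugin system in the first item would then be a way
>> to register such implementations by name so a resolution rule can look
>> up the right catalog when resolving a table identifier.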
>>
>> Please vote in the next 3 days on whether you agree with committing to
>> this goal.
>>
>> [ ] +1: Agree that we should consider a functional DSv2 implementation a
>> blocker for Spark 3.0
>> [ ] +0: . . .
>> [ ] -1: I disagree with this goal because . . .
>>
>> Thank you!
>>
>> --
>>
>> Ryan Blue
>>
>> Software Engineer
>>
>> Netflix
>>
>

-- 
Ryan Blue
Software Engineer
Netflix
