Since it’s not a regression from 2.0 (I believe the same issue affects both
2.0 and 2.1), it doesn’t merit a -1 vote according to the voting guidelines.

Of course, it would be nice to fix the various optimizer issues that all
seem to share a persist()-based workaround (another one is SPARK-18492
<https://issues.apache.org/jira/browse/SPARK-18492>), but I don’t think this
should block the release.
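
For anyone who hasn’t run into these, here is a rough sketch of what the
persist() workaround looks like in practice (a made-up DataFrame, not the
actual reproduction from either ticket):

    import org.apache.spark.sql.SparkSession

    // Hypothetical example only -- not the exact reproduction from
    // SPARK-18492 or SPARK-18589.
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("persist-workaround-sketch")
      .getOrCreate()
    import spark.implicits._

    val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

    // The general shape of the workaround: persist the derived DataFrame and
    // force materialization before reusing it, so downstream operators scan
    // the cached result instead of re-planning the full lineage.
    val intermediate = df.filter($"id" > 0).persist()
    intermediate.count() // forces materialization

    val result = intermediate.join(df, "id")
    result.show()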

On Mon, Dec 19, 2016 at 12:36 PM Franklyn D'souza <
franklyn.dso...@shopify.com> wrote:

> -1. https://issues.apache.org/jira/browse/SPARK-18589 hasn't been resolved
> by this release and is a blocker for our adoption of Spark 2.0. I've updated
> the issue with some steps to reproduce the error.
>
> On Mon, Dec 19, 2016 at 4:37 AM, Sean Owen <so...@cloudera.com> wrote:
>
> PS, here are the open issues for 2.1.0. Forgot this one. No Blockers, but
> one "Critical":
>
> SPARK-16845
> "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificOrdering"
> grows beyond 64 KB
>
> SPARK-18669 Update Apache docs regarding watermarking in Structured Streaming
>
> SPARK-18894 Event time watermark delay threshold specified in months or
> years gives incorrect results
>
> SPARK-18899 append data to a bucketed table with mismatched bucketing
> should fail
>
> SPARK-18909 The error message in `ExpressionEncoder.toRow` and `fromRow`
> is too verbose
>
> SPARK-18912 append to a non-file-based data source table should detect
> columns number mismatch
>
> SPARK-18913 append to a table with special column names should work
>
> SPARK-18921 check database existence with Hive.databaseExists instead of
> getDatabase
>
>
> On Fri, Dec 16, 2016 at 5:17 AM Reynold Xin <r...@databricks.com> wrote:
>
> Please vote on releasing the following candidate as Apache Spark version
> 2.1.0. The vote is open until Sun, December 18, 2016 at 21:30 PT and passes
> if a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 2.1.0
> [ ] -1 Do not release this package because ...
>
>
> To learn more about Apache Spark, please see http://spark.apache.org/
>
> The tag to be voted on is v2.1.0-rc5
> (cd0a08361e2526519e7c131c42116bf56fa62c76)
>
> The list of resolved JIRA tickets can be found at:
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.1.0
>
> The release files, including signatures, digests, etc. can be found at:
> http://home.apache.org/~pwendell/spark-releases/spark-2.1.0-rc5-bin/
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/pwendell.asc
>
> The staging repository for this release can be found at:
> https://repository.apache.org/content/repositories/orgapachespark-1223/
>
> The documentation corresponding to this release can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc5-docs/
>
>
> *FAQ*
>
> *How can I help test this release?*
>
> If you are a Spark user, you can help us test this release by taking an
> existing Spark workload, running it on this release candidate, and
> reporting any regressions.
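
One low-friction way to do that from an sbt build is to point a throwaway
project at the staging repository above and rebuild your workload against it.
A rough sketch (the dependency list is just an example; swap in whatever your
workload actually uses):

    // build.sbt fragment: resolve Spark 2.1.0 from the RC5 staging repository
    scalaVersion := "2.11.8"

    resolvers += "Apache Spark 2.1.0 RC5 staging" at
      "https://repository.apache.org/content/repositories/orgapachespark-1223/"

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "2.1.0",
      "org.apache.spark" %% "spark-sql"  % "2.1.0"
    )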
>
> *What should happen to JIRA tickets still targeting 2.1.0?*
>
> Committers should look at those and triage. Extremely important bug fixes,
> documentation, and API tweaks that impact compatibility should be worked on
> immediately. Please retarget everything else to 2.1.1 or 2.2.0.
>
> *What happened to RC3/RC4?*
>
> They had issues with the release packaging and as a result were skipped.
>
>
>
