We still have several blockers open for 2.1, so I expect at least one of
them means this won't be the final RC:

SPARK-18318 ML, Graph 2.1 QA: API: New Scala APIs, docs
SPARK-18319 ML, Graph 2.1 QA: API: Experimental, DeveloperApi, final, sealed audit
SPARK-18326 SparkR 2.1 QA: New R APIs and API docs
SPARK-18516 Separate instantaneous state from progress performance statistics
SPARK-18538 Concurrent Fetching DataFrameReader JDBC APIs Do Not Work
SPARK-18553 Executor loss may cause TaskSetManager to be leaked

That said, the purpose here is of course to start testing early, and we
should all do so.
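For anyone who wants a quick way in, here's a rough smoke-test sketch. The
tarball name below is an assumption on my part; pick whichever binary
package in the RC directory matches your environment:

```shell
# Download a binary package and its signature from the RC directory
# (exact tarball name may differ; check the directory listing first)
wget http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc1-bin/spark-2.1.0-bin-hadoop2.7.tgz
wget http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc1-bin/spark-2.1.0-bin-hadoop2.7.tgz.asc

# Import the release signing key and verify the signature
wget -O pwendell.asc https://people.apache.org/keys/committer/pwendell.asc
gpg --import pwendell.asc
gpg --verify spark-2.1.0-bin-hadoop2.7.tgz.asc spark-2.1.0-bin-hadoop2.7.tgz

# Unpack and run a quick sanity check before moving on to real workloads
tar xzf spark-2.1.0-bin-hadoop2.7.tgz
cd spark-2.1.0-bin-hadoop2.7
./bin/run-example SparkPi 10
```

Running an existing production workload against the RC, per Reynold's
note, is of course the more valuable test once the basics check out.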

For reference, here are the Critical issues still open:

SPARK-12347 Write script to run all MLlib examples for testing
SPARK-16032 Audit semantics of various insertion operations related to partitioned tables
SPARK-17861 Store data source partitions in metastore and push partition pruning into metastore
SPARK-18091 Deep if expressions cause Generated SpecificUnsafeProjection code to exceed JVM code size limit
SPARK-18274 Memory leak in PySpark StringIndexer
SPARK-18316 Spark MLlib, GraphX 2.1 QA umbrella
SPARK-18322 ML, Graph 2.1 QA: Update user guide for new features & APIs
SPARK-18323 Update MLlib, GraphX websites for 2.1
SPARK-18324 ML, Graph 2.1 QA: Programming guide update and migration guide
SPARK-18329 Spark R 2.1 QA umbrella
SPARK-18330 SparkR 2.1 QA: Update user guide for new features & APIs
SPARK-18331 Update SparkR website for 2.1
SPARK-18332 SparkR 2.1 QA: Programming guide, migration guide, vignettes updates
SPARK-18468 Flaky test: org.apache.spark.sql.hive.HiveSparkSubmitSuite.SPARK-9757 Persist Parquet relation with decimal column
SPARK-18549 Failed to Uncache a View that References a Dropped Table.
SPARK-18560 Receiver data can not be dataSerialized properly.


On Tue, Nov 29, 2016 at 1:26 AM Reynold Xin <r...@databricks.com> wrote:

> Please vote on releasing the following candidate as Apache Spark version
> 2.1.0. The vote is open until Thursday, December 1, 2016 at 18:00 UTC and
> passes if a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 2.1.0
> [ ] -1 Do not release this package because ...
>
>
> To learn more about Apache Spark, please see http://spark.apache.org/
>
> The tag to be voted on is v2.1.0-rc1
> (80aabc0bd33dc5661a90133156247e7a8c1bf7f5)
>
> The release files, including signatures, digests, etc. can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc1-bin/
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/pwendell.asc
>
> The staging repository for this release can be found at:
> https://repository.apache.org/content/repositories/orgapachespark-1216/
>
> The documentation corresponding to this release can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc1-docs/
>
>
> =======================================
> How can I help test this release?
> =======================================
> If you are a Spark user, you can help us test this release by taking an
> existing Spark workload and running it on this release candidate, then
> reporting any regressions.
>
> ===============================================================
> What should happen to JIRA tickets still targeting 2.1.0?
> ===============================================================
> Committers should look at those and triage. Extremely important bug fixes,
> documentation, and API tweaks that impact compatibility should be worked on
> immediately. Everything else please retarget to 2.1.1 or 2.2.0.
>
>
>
