Hi all,

After reviewing and testing RC1, the community has fixed multiple bugs and
improved the documentation. Thanks for the effort, everyone!
Even though there are already known issues in RC2, testing it now will help
us find more potential issues as early as possible.

Changes after RC1

   - Updates AuthEngine to pass the correct SecretKeySpec format
     <https://github.com/apache/spark/commit/243bfafd5cb58c1d3ae6c2a1a9e2c14c3a13526c>
   - [SPARK-36552 <https://issues.apache.org/jira/browse/SPARK-36552>][SQL]
     Fix different behavior for writing char/varchar to hive and datasource table
     <https://github.com/apache/spark/commit/bdd3b490263405a45537b406e20d1877980ab372>
   - [SPARK-36564 <https://issues.apache.org/jira/browse/SPARK-36564>][CORE]
     Fix NullPointerException in LiveRDDDistribution.toApi
     <https://github.com/apache/spark/commit/36df86c0d058977f0f202abd0106881474f18f0e>
   - Revert "[SPARK-34415 <https://issues.apache.org/jira/browse/SPARK-34415>][ML]
     Randomization in hyperparameter optimization"
     <https://github.com/apache/spark/commit/5463caac0d51d850166e09e2a33e55e213ab5752>
   - [SPARK-36398 <https://issues.apache.org/jira/browse/SPARK-36398>][SQL]
     Redact sensitive information in Spark Thrift Server log
     <https://github.com/apache/spark/commit/fb38887e001d33adef519d0288bd0844dcfe2bd5>
   - [SPARK-36594 <https://issues.apache.org/jira/browse/SPARK-36594>][SQL][3.2]
     ORC vectorized reader should properly check maximal number of fields
     <https://github.com/apache/spark/commit/c21303f02c582e97fefc130415e739ddda8dd43e>
   - [SPARK-36509 <https://issues.apache.org/jira/browse/SPARK-36509>][CORE]
     Fix the issue that executors are never re-scheduled if the worker stops
     with standalone cluster
     <https://github.com/apache/spark/commit/93f2b00501c7fad20fb6bc130b548cb87e9f91f1>
   - [SPARK-36367 <https://issues.apache.org/jira/browse/SPARK-36367>] Fix
   the behavior to follow pandas >= 1.3
   - Many documentation improvements


Known Issues after RC2 cut

   - PARQUET-2078 <https://issues.apache.org/jira/browse/PARQUET-2078>: Failed
   to read parquet file after writing with the same parquet version if
   `spark.sql.hive.convertMetastoreParquet` is false (see the sketch after
   this list)
   - SPARK-36629 <https://issues.apache.org/jira/browse/SPARK-36629>:
   Upgrade aircompressor to 1.21
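
If you want to check whether your workload exercises the PARQUET-2078 path
while testing RC2, here is a minimal sketch. The table name and session
setup are illustrative assumptions, not taken from the report; the relevant
detail is only that the issue is reported when
spark.sql.hive.convertMetastoreParquet is false.

    // Minimal sketch (assumed setup): exercise the Hive SerDe Parquet path
    // that PARQUET-2078 is reported against.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("parquet-2078-check")
      .enableHiveSupport()
      .getOrCreate()

    // The issue only applies when Spark uses the Hive SerDe instead of its
    // native Parquet reader for Hive metastore Parquet tables.
    spark.conf.set("spark.sql.hive.convertMetastoreParquet", "false")

    // Hypothetical Hive Parquet table: write with this RC, then read it back.
    spark.sql("CREATE TABLE IF NOT EXISTS t_parquet_check (id INT) STORED AS PARQUET")
    spark.sql("INSERT INTO t_parquet_check VALUES (1)")
    spark.sql("SELECT * FROM t_parquet_check").show()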


Thanks,
Gengliang

On Wed, Sep 1, 2021 at 3:07 PM Gengliang Wang <ltn...@gmail.com> wrote:

> Please vote on releasing the following candidate as
> Apache Spark version 3.2.0.
>
> The vote is open until 11:59pm Pacific time September 3 and passes if a
> majority of +1 PMC votes are cast, with a minimum of 3 +1 votes.
>
> [ ] +1 Release this package as Apache Spark 3.2.0
> [ ] -1 Do not release this package because ...
>
> To learn more about Apache Spark, please see http://spark.apache.org/
>
> The tag to be voted on is v3.2.0-rc2 (commit
> 6bb3523d8e838bd2082fb90d7f3741339245c044):
> https://github.com/apache/spark/tree/v3.2.0-rc2
>
> The release files, including signatures, digests, etc. can be found at:
> https://dist.apache.org/repos/dist/dev/spark/v3.2.0-rc2-bin/
>
> Signatures used for Spark RCs can be found in this file:
> https://dist.apache.org/repos/dist/dev/spark/KEYS
>
> The staging repository for this release can be found at:
> https://repository.apache.org/content/repositories/orgapachespark-1389
>
> The documentation corresponding to this release can be found at:
> https://dist.apache.org/repos/dist/dev/spark/v3.2.0-rc2-docs/
>
> The list of bug fixes going into 3.2.0 can be found at the following URL:
> https://issues.apache.org/jira/projects/SPARK/versions/12349407
>
> This release is using the release script of the tag v3.2.0-rc2.
>
>
> FAQ
>
> =========================
> How can I help test this release?
> =========================
> If you are a Spark user, you can help us test this release by taking
> an existing Spark workload and running it on this release candidate, then
> reporting any regressions.
>
> If you're working in PySpark, you can set up a virtual env, install
> the current RC, and see if anything important breaks. In Java/Scala,
> you can add the staging repository to your project's resolvers and test
> with the RC (make sure to clean up the artifact cache before/after so
> you don't end up building with an out-of-date RC going forward).
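>
> For the Java/Scala case, a minimal sketch of what adding the staging
> repository could look like in an sbt build (the resolver name and the
> choice of the spark-sql module are placeholders, not prescribed by this
> thread; the same idea applies to Maven or Gradle):
>
>     // build.sbt: resolve artifacts from the RC2 staging repository listed above
>     resolvers += "Spark 3.2.0 RC2 staging" at
>       "https://repository.apache.org/content/repositories/orgapachespark-1389"
>
>     // Then depend on the 3.2.0 artifacts published to that repository.
>     libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.2.0"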
>
> ===========================================
> What should happen to JIRA tickets still targeting 3.2.0?
> ===========================================
> The current list of open tickets targeted at 3.2.0 can be found at
> https://issues.apache.org/jira/projects/SPARK by searching for "Target
> Version/s" = 3.2.0
>
> Committers should look at those and triage. Extremely important bug
> fixes, documentation, and API tweaks that impact compatibility should
> be worked on immediately. Everything else should be retargeted to an
> appropriate release.
>
> ==================
> But my bug isn't fixed?
> ==================
> In order to make timely releases, we will typically not hold the
> release unless the bug in question is a regression from the previous
> release. That being said, if there is a regression that has not been
> correctly targeted, please ping me or a committer to help target the
> issue.
>
