There might be other blockers. Let's wait and see.

On Tue, May 17, 2022 at 8:59 PM beliefer <belie...@163.com> wrote:
> OK, let it go into 3.3.1.
>
> On 2022-05-17 18:59:13, "Hyukjin Kwon" <gurwls...@gmail.com> wrote:
>
> I think most users won't be affected since aggregate pushdown is disabled
> by default.
>
> On Tue, 17 May 2022 at 19:53, beliefer <belie...@163.com> wrote:
>
>> If we don't include https://github.com/apache/spark/pull/36556, it will
>> be a breaking change when we merge it into 3.3.1.
>>
>> At 2022-05-17 18:26:12, "Hyukjin Kwon" <gurwls...@gmail.com> wrote:
>>
>> We need to add https://github.com/apache/spark/pull/36556 to RC2.
>>
>> We will likely have to change the version being added if RC2 passes.
>> Since this is a new API/improvement, I would prefer not to block the
>> release on that.
>>
>> On Tue, 17 May 2022 at 19:19, beliefer <belie...@163.com> wrote:
>>
>>> We need to add https://github.com/apache/spark/pull/36556 to RC2.
>>>
>>> On 2022-05-17 17:37:13, "Hyukjin Kwon" <gurwls...@gmail.com> wrote:
>>>
>>> That seems like a test-only issue. I made a quick followup at
>>> https://github.com/apache/spark/pull/36576.
>>>
>>> On Tue, 17 May 2022 at 03:56, Sean Owen <sro...@gmail.com> wrote:
>>>
>>>> I'm still seeing failures related to the function registry, like:
>>>>
>>>> ExpressionsSchemaSuite:
>>>> - Check schemas for expression examples *** FAILED ***
>>>>   396 did not equal 398 Expected 396 blocks in result file but got 398.
>>>>   Try regenerating the result files. (ExpressionsSchemaSuite.scala:161)
>>>>
>>>> - SPARK-14415: All functions should have own descriptions *** FAILED ***
>>>>   "Function: bloom_filter_aggClass:
>>>>   org.apache.spark.sql.catalyst.expressions.aggregate.BloomFilterAggregateUsage:
>>>>   N/A." contained "N/A." Failed for [function_desc: string] (N/A. existed
>>>>   in the result) (QueryTest.scala:54)
>>>>
>>>> There is consistently a difference of 2 between the expected and actual
>>>> lists of functions. I haven't looked closely and don't know this code.
>>>> I'm on Ubuntu 22.04. Anyone else seeing something like this? Wondering
>>>> if it's something weird to do with case sensitivity, hidden files
>>>> lurking somewhere, etc.
>>>>
>>>> I suspect it's not a 'real' error, as the Linux-based testers work fine,
>>>> but I also can't think of why this is failing.
>>>>
>>>> On Mon, May 16, 2022 at 7:44 AM Maxim Gekk
>>>> <maxim.g...@databricks.com.invalid> wrote:
>>>>
>>>>> Please vote on releasing the following candidate as Apache Spark
>>>>> version 3.3.0.
>>>>>
>>>>> The vote is open until 11:59pm Pacific time May 19th and passes if a
>>>>> majority of +1 PMC votes are cast, with a minimum of 3 +1 votes.
>>>>>
>>>>> [ ] +1 Release this package as Apache Spark 3.3.0
>>>>> [ ] -1 Do not release this package because ...
>>>>>
>>>>> To learn more about Apache Spark, please see http://spark.apache.org/
>>>>>
>>>>> The tag to be voted on is v3.3.0-rc2 (commit
>>>>> c8c657b922ac8fd8dcf9553113e11a80079db059):
>>>>> https://github.com/apache/spark/tree/v3.3.0-rc2
>>>>>
>>>>> The release files, including signatures, digests, etc., can be found at:
>>>>> https://dist.apache.org/repos/dist/dev/spark/v3.3.0-rc2-bin/
>>>>>
>>>>> Signatures used for Spark RCs can be found in this file:
>>>>> https://dist.apache.org/repos/dist/dev/spark/KEYS
>>>>>
>>>>> The staging repository for this release can be found at:
>>>>> https://repository.apache.org/content/repositories/orgapachespark-1403
>>>>>
>>>>> The documentation corresponding to this release can be found at:
>>>>> https://dist.apache.org/repos/dist/dev/spark/v3.3.0-rc2-docs/
>>>>>
>>>>> The list of bug fixes going into 3.3.0 can be found at the following URL:
>>>>> https://issues.apache.org/jira/projects/SPARK/versions/12350369
>>>>>
>>>>> This release is using the release script of the tag v3.3.0-rc2.
>>>>>
>>>>> FAQ
>>>>>
>>>>> =========================
>>>>> How can I help test this release?
>>>>> =========================
>>>>> If you are a Spark user, you can help us test this release by taking
>>>>> an existing Spark workload, running it on this release candidate, and
>>>>> reporting any regressions.
>>>>>
>>>>> If you're working in PySpark, you can set up a virtual env, install
>>>>> the current RC, and see if anything important breaks. In Java/Scala,
>>>>> you can add the staging repository to your project's resolvers and
>>>>> test with the RC (make sure to clean up the artifact cache before and
>>>>> after so you don't end up building with an out-of-date RC going
>>>>> forward).
>>>>>
>>>>> ===========================================
>>>>> What should happen to JIRA tickets still targeting 3.3.0?
>>>>> ===========================================
>>>>> The current list of open tickets targeted at 3.3.0 can be found at
>>>>> https://issues.apache.org/jira/projects/SPARK by searching for
>>>>> "Target Version/s" = 3.3.0.
>>>>>
>>>>> Committers should look at those and triage. Extremely important bug
>>>>> fixes, documentation, and API tweaks that impact compatibility should
>>>>> be worked on immediately. Everything else, please retarget to an
>>>>> appropriate release.
>>>>>
>>>>> ==================
>>>>> But my bug isn't fixed?
>>>>> ==================
>>>>> In order to make timely releases, we will typically not hold the
>>>>> release unless the bug in question is a regression from the previous
>>>>> release. That being said, if there is a regression that has not been
>>>>> correctly targeted, please ping me or a committer to help target the
>>>>> issue.
>>>>>
>>>>> Maxim Gekk
>>>>>
>>>>> Software Engineer
>>>>>
>>>>> Databricks, Inc.
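For the "add the staging repository to your project's resolvers" step described in the
quoted vote email, a minimal build.sbt sketch could look like the following. Only the
staging URL comes from the email; the spark-sql artifact, "provided" scope, and Scala
version are illustrative assumptions, so adjust them to your own project:

  // Resolve the 3.3.0 RC2 candidate artifacts from the staging repository
  // listed in the vote email instead of Maven Central.
  resolvers += ("Apache Spark 3.3.0 RC2 staging" at
    "https://repository.apache.org/content/repositories/orgapachespark-1403/")

  // Example dependency built against the release candidate version.
  libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.3.0" % "provided"

  // Assumed Scala build; match whichever 2.12/2.13 line your project uses.
  scalaVersion := "2.12.15"

After testing, clear the RC artifacts from your local Ivy/Coursier cache, as the email
notes, so later builds don't silently keep using the candidate jars.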