The issue with SPARK-23292 is that we currently run the python tests
related to pandas and pyarrow only with Python 3 (which is already
installed on all AMPLab Jenkins machines); they are skipped elsewhere.
Since the code path is fully tested, we decided not to mark it as a
blocker; I've reworded the title to better indicate that.
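
For reference, the pandas/pyarrow code path can be exercised locally with
something along these lines (a sketch; it assumes pandas and pyarrow are
installed for your python3):

  python/run-tests --python-executables=python3 --modules=pyspark-sql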

On 13 February 2018 at 08:16, Sean Owen <sro...@apache.org> wrote:

> +1 from me. Again, licenses and sigs look fine. I built the source
> distribution with "-Phive -Phadoop-2.7 -Pyarn -Pkubernetes" and all tests
> passed.
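>
> For reference, that build corresponds to something like the following (a
> sketch; Maven runs the tests as part of the package phase):
>
>   ./build/mvn -Phive -Phadoop-2.7 -Pyarn -Pkubernetes clean package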
>
> Remaining issues for 2.3.0, none of which are a Blocker:
>
> SPARK-22797 Add multiple column support to PySpark Bucketizer
> SPARK-23083 Adding Kubernetes as an option to https://spark.apache.org/
> SPARK-23292 python tests related to pandas are skipped
> SPARK-23309 Spark 2.3 cached query performance 20-30% worse than Spark 2.2
> SPARK-23316 AnalysisException after max iteration reached for IN query
>
> ... though the pandas tests issue is "Critical".
>
> (SPARK-23083 is an update to the main site that should happen as the
> artifacts are released, so it's OK.)
>
> On Tue, Feb 13, 2018 at 12:30 AM Sameer Agarwal <samee...@apache.org>
> wrote:
>
>> Now that all known blockers have once again been resolved, please vote on
>> releasing the following candidate as Apache Spark version 2.3.0. The vote
>> is open until Friday February 16, 2018 at 8:00:00 am UTC and passes if a
>> majority of at least 3 PMC +1 votes are cast.
>>
>>
>> [ ] +1 Release this package as Apache Spark 2.3.0
>>
>> [ ] -1 Do not release this package because ...
>>
>>
>> To learn more about Apache Spark, please see https://spark.apache.org/
>>
>> The tag to be voted on is v2.3.0-rc3:
>> https://github.com/apache/spark/tree/v2.3.0-rc3
>> (89f6fcbafcfb0a7aeb897fba6036cb085bd35121)
>>
>> The list of JIRA tickets resolved in this release can be found at:
>> https://issues.apache.org/jira/projects/SPARK/versions/12339551
>>
>> The release files, including signatures, digests, etc. can be found at:
>> https://dist.apache.org/repos/dist/dev/spark/v2.3.0-rc3-bin/
>>
>> Release artifacts are signed with the following key:
>> https://dist.apache.org/repos/dist/dev/spark/KEYS
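>>
>> For example, a signature can be verified along these lines (a sketch; the
>> artifact name below is an assumption, substitute an actual file from the
>> -bin/ directory above):
>>
>>   curl -LO https://dist.apache.org/repos/dist/dev/spark/KEYS
>>   gpg --import KEYS
>>   curl -LO https://dist.apache.org/repos/dist/dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-hadoop2.7.tgz
>>   curl -LO https://dist.apache.org/repos/dist/dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-hadoop2.7.tgz.asc
>>   gpg --verify spark-2.3.0-bin-hadoop2.7.tgz.asc spark-2.3.0-bin-hadoop2.7.tgz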
>>
>> The staging repository for this release can be found at:
>> https://repository.apache.org/content/repositories/orgapachespark-1264/
>>
>> The documentation corresponding to this release can be found at:
>> https://dist.apache.org/repos/dist/dev/spark/v2.3.0-rc3-docs/_site/index.html
>>
>>
>> FAQ
>>
>> =======================================
>> What are the unresolved issues targeted for 2.3.0?
>> =======================================
>>
>> Please see https://s.apache.org/oXKi. At the time of writing, there are
>> no known release blockers.
>>
>> =========================
>> How can I help test this release?
>> =========================
>>
>> If you are a Spark user, you can help us test this release by taking an
>> existing Spark workload, running it on this release candidate, and
>> reporting any regressions.
>>
>> If you're working in PySpark, you can set up a virtual env, install the
>> current RC, and see if anything important breaks. In Java/Scala, you can
>> add the staging repository to your project's resolvers and test with the
>> RC (make sure to clean up the artifact cache before/after so you don't
>> end up building with an out-of-date RC going forward).
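>>
>> For PySpark, a minimal sketch (the pyspark tarball name below is an
>> assumption; check the -bin/ directory above for the actual file):
>>
>>   python -m venv spark-rc-test
>>   source spark-rc-test/bin/activate
>>   pip install https://dist.apache.org/repos/dist/dev/spark/v2.3.0-rc3-bin/pyspark-2.3.0.tar.gz
>>   python -c "import pyspark; print(pyspark.__version__)"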
>>
>> ===========================================
>> What should happen to JIRA tickets still targeting 2.3.0?
>> ===========================================
>>
>> Committers should look at those and triage. Extremely important bug
>> fixes, documentation, and API tweaks that impact compatibility should be
>> worked on immediately. Please retarget everything else to 2.3.1 or 2.4.0
>> as appropriate.
>>
>> ===================
>> Why is my bug not fixed?
>> ===================
>>
>> In order to make timely releases, we will typically not hold the release
>> unless the bug in question is a regression from 2.2.0. That said, if
>> there is a regression from 2.2.0 that has not been correctly targeted,
>> please ping me or a committer to help target the issue (you can see the
>> open issues listed as impacting Spark 2.3.0 at
>> https://s.apache.org/WmoI).
>>
>>
>> Regards,
>> Sameer
>>
>


-- 
Sameer Agarwal
Computer Science | UC Berkeley
http://cs.berkeley.edu/~sameerag
