Hi, All.

After a short celebration of Apache Spark 3.0, I'd like to ask for the
community's opinion on feature expectations for Apache Spark 3.1.

First of all, Apache Spark 3.1 is scheduled for December 2020.
- https://spark.apache.org/versioning-policy.html

I'm expecting the following items:

1. Support Scala 2.13
2. Use Apache Hadoop 3.2 by default for better cloud support
3. Declaring Kubernetes Scheduler GA
    From my perspective, the last main missing piece was dynamic allocation:
    - Dynamic allocation with shuffle tracking already shipped in 3.0
      (a small configuration sketch follows this list).
    - Dynamic allocation with worker decommission/data migration is
      targeting 3.1. (Thanks, Holden)
4. DSv2 Stabilization
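
As a quick illustration of item 3, here is a minimal sketch of enabling
dynamic allocation with shuffle tracking, which shipped in 3.0. The app name
and executor counts below are only placeholders, not a recommendation:

    import org.apache.spark.sql.SparkSession

    // Dynamic allocation without an external shuffle service: shuffle data
    // is tracked on executors (spark.dynamicAllocation.shuffleTracking.enabled,
    // available since Spark 3.0).
    val spark = SparkSession.builder()
      .appName("dynamic-allocation-sketch")  // placeholder name
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
      .config("spark.dynamicAllocation.minExecutors", "1")   // illustrative values
      .config("spark.dynamicAllocation.maxExecutors", "10")
      .getOrCreate()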

I'm aware of some more features that are currently on the way, but I would
love to hear opinions from the main developers, and even more from the users
who need those features.

Thank you in advance. Any comments are welcome.

Bests,
Dongjoon.