GitHub user witgo opened a pull request:

    https://github.com/apache/spark/pull/16806

    [WIP][SPARK-18890][CORE] Move task serialization from the TaskSetManager to the CoarseGrainedSchedulerBackend

    ## What changes were proposed in this pull request?
    
    See https://issues.apache.org/jira/browse/SPARK-18890
    
    When a stage has a large number of tasks, this PR improves scheduling performance by roughly ~~15%~~.
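    
    Roughly, the idea is that the `TaskSetManager` no longer serializes each task while answering a resource offer; serialization happens later, in the `CoarseGrainedSchedulerBackend`, just before the task is sent to an executor. A minimal sketch of that split, using hypothetical simplified types (stand-ins, not the real Spark classes):
    
    ```scala
    import java.nio.ByteBuffer
    
    // Hypothetical, simplified stand-ins; not the real Spark classes.
    case class Task(taskId: Long, partition: Int, payload: Array[Byte])
    case class TaskDescription(taskId: Long, executorId: String, task: Task)
    
    object SerializationMoveSketch {
      private def serialize(task: Task): ByteBuffer = ByteBuffer.wrap(task.payload)
    
      // Before: each task was serialized while answering a resource offer,
      // on the scheduling hot path inside the TaskSetManager.
      def resourceOfferBefore(task: Task, executorId: String): (String, ByteBuffer) =
        (executorId, serialize(task))
    
      // After: resourceOffer only returns a lightweight description ...
      def resourceOfferAfter(task: Task, executorId: String): TaskDescription =
        TaskDescription(task.taskId, executorId, task)
    
      // ... and the backend's launch path serializes right before dispatching
      // the task to the executor.
      def launchTask(desc: TaskDescription): (String, ByteBuffer) =
        (desc.executorId, serialize(desc.task))
    }
    ```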
    
    The test code:
    
    ```scala
    // 100 small elements repartitioned into 100,000 partitions => 100,000 tasks per job
    val rdd = sc.parallelize(0 until 100).repartition(100000)
    // Materialize the local checkpoint and warm up before timing
    rdd.localCheckpoint().count()
    rdd.sum()
    // Time ten scheduling-heavy jobs
    (1 to 10).foreach { i =>
      val serializeStart = System.currentTimeMillis()
      rdd.sum()
      val serializeFinish = System.currentTimeMillis()
      println(f"Test $i: ${(serializeFinish - serializeStart) / 1000D}%1.2f")
    }
    ```
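    
    In this setup each timed `rdd.sum()` submits a job of 100,000 trivial tasks, so the measured time should be dominated by driver-side task serialization and scheduling rather than by the computation itself.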
    
    and the following `spark-defaults.conf`:
    
    ```
    spark.master                                      yarn-client
    spark.executor.instances                          20
    spark.driver.memory                               64g
    spark.executor.memory                             30g
    spark.executor.cores                              5
    spark.default.parallelism                         100
    spark.sql.shuffle.partitions                      100
    spark.serializer                                  org.apache.spark.serializer.KryoSerializer
    spark.driver.maxResultSize                        0
    spark.ui.enabled                                  false
    spark.driver.extraJavaOptions                     -XX:+UseG1GC -XX:+UseStringDeduplication -XX:G1HeapRegionSize=16M -XX:MetaspaceSize=512M
    spark.executor.extraJavaOptions                   -XX:+UseG1GC -XX:+UseStringDeduplication -XX:G1HeapRegionSize=16M -XX:MetaspaceSize=256M
    spark.cleaner.referenceTracking.blocking          true
    spark.cleaner.referenceTracking.blocking.shuffle  true
    ```
    
    The test results are as follows:
    
    **The table is out of date, to be updated**
    
    | [SPARK-17931](https://github.com/witgo/spark/tree/SPARK-17931) | [941b3f9](https://github.com/apache/spark/commit/941b3f9aca59e62c078508a934f8c2221ced96ce) |
    | --- | --- |
    | 17.116 s | 21.764 s |
    
    ## How was this patch tested?
    
    Existing tests.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/witgo/spark SPARK-18890-2

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/16806.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #16806
    
----
commit ab5a763375e5b9308e55acbedfb1e7bf2cb739de
Author: Guoqiang Li <wi...@qq.com>
Date:   2017-01-08T11:18:59Z

    Move task serialization from the TaskSetManager to the CoarseGrainedSchedulerBackend

commit 292a8bcf09fce3826b658c18c5d923379346fe52
Author: Guoqiang Li <wi...@qq.com>
Date:   2017-01-11T06:05:53Z

    review commits

commit 469586efd4abf47a5f891a6a4b72bba83e608aaf
Author: Guoqiang Li <wi...@qq.com>
Date:   2017-01-13T02:10:03Z

    add test "Scheduler aborts stages that have unserializable partition"

commit 8f7edc6c16c25aae6fae4f6dc6fa76eca8f06fd6
Author: Guoqiang Li <wi...@qq.com>
Date:   2017-02-04T14:07:51Z

    Refactor the serialization TaskDescription code

----

