GitHub user zzcclp opened a pull request:

    https://github.com/apache/carbondata/pull/1245

    [CARBONDATA-1366]Change rdd storage level to 'MEMORY_AND_DISK_SER' to 
improve loading performance when sort_scope=global_sort

    My testing environment and configuration are as follows:
    
    **Env:**
    6 executors, 9G mem + 6 cores per executor 
    
    **Configs:**
    SINGLE_PASS=true
    SORT_SCOPE=GLOBAL_SORT
    spark.memory.fraction=0.5
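
    For context, here is a minimal sketch (not part of this PR) of how the configuration above might be applied; the session setup, table name, and input path are placeholders, and actual CarbonData deployments may create the session differently:

        import org.apache.spark.sql.SparkSession

        // Placeholder session; spark.memory.fraction matches the test config.
        val spark = SparkSession.builder()
          .appName("carbon-global-sort-load")
          .config("spark.memory.fraction", "0.5")
          .getOrCreate()

        // SORT_SCOPE and SINGLE_PASS are passed as CarbonData load options;
        // the table name and input path here are hypothetical.
        spark.sql(
          """LOAD DATA INPATH '/path/to/input'
            |INTO TABLE test_table
            |OPTIONS('SORT_SCOPE'='GLOBAL_SORT', 'SINGLE_PASS'='TRUE')""".stripMargin)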
    
    With 'convertRDD.persist(StorageLevel.MEMORY_AND_DISK_SER)' in method 
'org.apache.carbondata.spark.load.DataLoadProcessBuilderOnSpark.loadDataUsingGlobalSort', 
it takes about **7.2** min to load 144,136,697 rows (10.9 GB of Parquet files); 
with 'convertRDD.persist(StorageLevel.MEMORY_AND_DISK)', loading the same data 
takes about **9.5** min.
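
    For reference, a minimal sketch (not the actual patch) of the persist change described above; the sample data, session setup, and row shape are placeholders, while the real change applies the same storage level to the converted-row RDD inside 'loadDataUsingGlobalSort':

        import org.apache.spark.sql.SparkSession
        import org.apache.spark.storage.StorageLevel

        val spark = SparkSession.builder()
          .appName("persist-level-sketch")
          .master("local[*]")
          .getOrCreate()

        // Stand-in for the converted-row RDD built inside loadDataUsingGlobalSort;
        // the real RDD holds converted rows, not these synthetic pairs.
        val convertRDD = spark.sparkContext
          .parallelize(1 to 1000000)
          .map(i => (i % 1000, s"row-$i"))

        // Previous level: rows cached as deserialized objects (larger heap footprint).
        //   convertRDD.persist(StorageLevel.MEMORY_AND_DISK)
        // Proposed level: rows cached in serialized form, which lowers memory use
        // and GC pressure at the cost of deserializing on access.
        convertRDD.persist(StorageLevel.MEMORY_AND_DISK_SER)

        // The global-sort load path then sorts and writes the cached rows (writing omitted).
        convertRDD.sortByKey().count()

    Serialized caching trades CPU for memory, which tends to help when the cached rows would otherwise spill heavily to disk during the global sort.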


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/zzcclp/carbondata CARBONDATA-1366

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/1245.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1245
    
----
commit b50a3bb590d53e7749665e2f20b6a78ca963561f
Author: Zhang Zhichao <441586...@qq.com>
Date:   2017-08-08T09:42:40Z

    [CARBONDATA-1366]Change rdd storage level to 'MEMORY_AND_DISK_SER' to 
improve loading performance when sort_scope=global_sort
    
    Change rdd storage level to 'MEMORY_AND_DISK_SER' to improve loading 
performance when sort_scope=global_sort

----


