GitHub user xuchuanyin opened a pull request:

    https://github.com/apache/carbondata/pull/2056

    [CARBONDATA-2238][DataLoad] Merge and spill in-memory pages if memory is 
not enough

    Currently in CarbonData, pages are added to memory. If memory is not 
enough, newly incoming pages are spilled to disk directly. This change 
merges and spills the in-memory pages to make room for the newly incoming 
pages.
    
    As a result, CarbonData spills less often, writing fewer but larger files 
instead of many small ones, and the merge sort of the pages runs in memory 
instead of over spilled files.
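    The merge-and-spill policy described above can be sketched as follows. This is a hypothetical simplification, not CarbonData's actual code: the names (`PagePool`, `addPage`, `mergeAndSpill`) are illustrative, pages are modeled as plain int arrays, and "spilled files" are modeled as in-memory sorted runs.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the merge-and-spill policy; not CarbonData's classes.
class PagePool {
    private final int capacityInPages;
    private final List<int[]> inMemoryPages = new ArrayList<>();
    // Each spilled "file" is modeled as one large sorted run kept in a list.
    final List<int[]> spilledRuns = new ArrayList<>();

    PagePool(int capacityInPages) {
        this.capacityInPages = capacityInPages;
    }

    // Admit a new page; if the pool is full, merge and spill the resident
    // pages first instead of spilling the incoming page directly.
    void addPage(int[] page) {
        if (inMemoryPages.size() >= capacityInPages) {
            mergeAndSpill();
        }
        inMemoryPages.add(page);
    }

    // Merge-sort all resident pages in memory, emit them as a single large
    // run, then clear the pool. Fewer, bigger spill files result.
    private void mergeAndSpill() {
        List<Integer> merged = new ArrayList<>();
        for (int[] page : inMemoryPages) {
            for (int value : page) {
                merged.add(value);
            }
        }
        Collections.sort(merged);
        spilledRuns.add(merged.stream().mapToInt(Integer::intValue).toArray());
        inMemoryPages.clear();
    }

    public static void main(String[] args) {
        PagePool pool = new PagePool(2);
        pool.addPage(new int[]{5, 1});
        pool.addPage(new int[]{4, 2});
        pool.addPage(new int[]{3}); // triggers merge-and-spill of the first two pages
        System.out.println(Arrays.toString(pool.spilledRuns.get(0))); // prints [1, 2, 4, 5]
    }
}
```

    Note how the third page is admitted only after the first two have been merged into a single sorted run, which is the behavioral change this PR claims over spilling the new page by itself.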
    
    Be sure to do all of the following checklist to help us incorporate 
    your contribution quickly and easily:
    
     - [x] Any interfaces changed?
     `NO`
     - [x] Any backward compatibility impacted?
      `NO`
     - [x] Document update required?
     `NO`
     - [x] Testing done
            Please provide details on 
            - Whether new unit test cases have been added or why no new tests 
are required?
     `NO`
            - How it is tested? Please attach test report.
     `Tested in local machine`
            - Is it a performance related change? Please attach the performance 
test report.
    `Performance should improve. We reduce the frequency of spilling pages to 
disk and increase the data size of each spill. Also, the merge sort of pages 
is memory based instead of file based.`
            - Any additional information to help reviewers in testing this 
change.
           
     - [x] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xuchuanyin/carbondata 
0313_spill_inmemory_pages

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2056.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2056
    
----
commit da3329ca60250e07b568f905ba5cdaa115f8d522
Author: xuchuanyin <xuchuanyin@...>
Date:   2018-03-12T12:25:17Z

    Fix exhausted memory problem during unsafe data loading
    
    If the size of the unsafe row pages equals that of the working memory, the
    last page will exhaust the working memory and CarbonData will hit a 'not
    enough memory' error when converting data to columnar format.
    
    All unsafe pages should be spilled to disk or moved to unsafe sort memory
    instead of being kept in unsafe working memory.

commit 16b376eb8a077440db884649744b6cd22bba95d5
Author: xuchuanyin <xuchuanyin@...>
Date:   2018-03-13T02:31:36Z

    Merge and spill in-memory pages if memory is not enough
    
    Currently in CarbonData, pages are added to memory. If memory is not
    enough, newly incoming pages are spilled to disk directly. This change
    merges and spills the in-memory pages to make room for the newly
    incoming pages.
    As a result, CarbonData spills less often, writing fewer but larger
    files instead of many small ones.

----
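The exhaustion scenario described in the first commit above can be sketched as follows. This is a hedged, simplified model, not CarbonData code: the names (`WorkingMemory`, `keepPage`, `releasePage`) and the byte accounting are hypothetical, chosen only to show why a page as large as the working memory leaves no room for the columnar conversion step.

```java
// Hypothetical model of the exhaustion bug fixed by the first commit;
// class and method names are illustrative, not CarbonData APIs.
class WorkingMemory {
    private final long capacityBytes;
    private long usedBytes = 0;

    WorkingMemory(long capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    // Old behavior: a row page stays resident in working memory.
    boolean keepPage(long pageBytes) {
        if (usedBytes + pageBytes > capacityBytes) {
            return false;
        }
        usedBytes += pageBytes;
        return true;
    }

    // Fixed behavior: spill the page to disk or move it to unsafe sort
    // memory, freeing working memory before the columnar conversion runs.
    void releasePage(long pageBytes) {
        usedBytes -= pageBytes;
    }

    long freeBytes() {
        return capacityBytes - usedBytes;
    }

    public static void main(String[] args) {
        WorkingMemory wm = new WorkingMemory(1024);
        wm.keepPage(1024);                  // page as large as working memory
        System.out.println(wm.freeBytes()); // prints 0: conversion has no room
        wm.releasePage(1024);               // the fix: move the page out first
        System.out.println(wm.freeBytes()); // prints 1024
    }
}
```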

