[GitHub] carbondata pull request #2052: [CARBONDATA-2246][DataLoad] Fix exhausted mem...

2018-04-08 Thread xuchuanyin
Github user xuchuanyin closed the pull request at:

https://github.com/apache/carbondata/pull/2052


---


[GitHub] carbondata pull request #2052: [CARBONDATA-2246][DataLoad] Fix exhausted mem...

2018-03-13 Thread xuchuanyin
Github user xuchuanyin commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2052#discussion_r174364965
  
--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/UnsafeSortDataRows.java ---
@@ -218,50 +213,45 @@ public void addRow(Object[] row) throws CarbonSortKeyAndGroupByException {
       rowPage.addRow(row, rowBuffer.get());
     } else {
       try {
-        if (enableInMemoryIntermediateMerge) {
-          unsafeInMemoryIntermediateFileMerger.startInmemoryMergingIfPossible();
-        }
-        unsafeInMemoryIntermediateFileMerger.startFileMergingIfPossible();
-        semaphore.acquire();
-        dataSorterAndWriterExecutorService.submit(new DataSorterAndWriter(rowPage));
+        handlePreviousPage();
         rowPage = createUnsafeRowPage();
         rowPage.addRow(row, rowBuffer.get());
       } catch (Exception e) {
         LOGGER.error(
             "exception occurred while trying to acquire a semaphore lock: " + e.getMessage());
         throw new CarbonSortKeyAndGroupByException(e);
       }
-
     }
   }
 
   /**
-   * Below method will be used to start storing process This method will get
-   * all the temp files present in sort temp folder then it will create the
-   * record holder heap and then it will read first record from each file and
-   * initialize the heap
+   * Below method will be used to start sorting process. This method will get
+   * all the temp unsafe pages in memory and all the temp files and try to merge them if possible.
+   * Also, it will spill the pages to disk or add it to unsafe sort memory.
    *
-   * @throws InterruptedException
+   * @throws CarbonSortKeyAndGroupByException if error occurs during in-memory merge
+   * @throws InterruptedException if error occurs during data sort and write
    */
-  public void startSorting() throws InterruptedException {
+  public void startSorting() throws CarbonSortKeyAndGroupByException, InterruptedException {
     LOGGER.info("Unsafe based sorting will be used");
     if (this.rowPage.getUsedSize() > 0) {
-      TimSort<UnsafeCarbonRow, IntPointerBuffer> timSort = new TimSort<>(
-          new UnsafeIntSortDataFormat(rowPage));
-      if (parameters.getNumberOfNoDictSortColumns() > 0) {
-        timSort.sort(rowPage.getBuffer(), 0, rowPage.getBuffer().getActualSize(),
-            new UnsafeRowComparator(rowPage));
-      } else {
-        timSort.sort(rowPage.getBuffer(), 0, rowPage.getBuffer().getActualSize(),
-            new UnsafeRowComparatorForNormalDims(rowPage));
-      }
-      unsafeInMemoryIntermediateFileMerger.addDataChunkToMerge(rowPage);
+      handlePreviousPage();
     } else {
       rowPage.freeMemory();
     }
     startFileBasedMerge();
   }
 
+  private void handlePreviousPage()
--- End diff --

fixed
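
For readers following the thread: the body of the extracted handlePreviousPage is not shown in this hunk. A minimal sketch of what it plausibly contains, assuming it simply wraps the logic removed above (every identifier is taken from the deleted lines; the actual body may differ):

    /**
     * Handle the in-progress row page before a new one is created: trigger
     * in-memory and file merging if possible, then submit the page to a
     * sorter-and-writer task that spills it to disk or moves it to unsafe
     * sort memory.
     */
    private void handlePreviousPage()
        throws CarbonSortKeyAndGroupByException, InterruptedException {
      if (enableInMemoryIntermediateMerge) {
        unsafeInMemoryIntermediateFileMerger.startInmemoryMergingIfPossible();
      }
      unsafeInMemoryIntermediateFileMerger.startFileMergingIfPossible();
      // bound the number of pages being sorted/written concurrently
      semaphore.acquire();
      dataSorterAndWriterExecutorService.submit(new DataSorterAndWriter(rowPage));
    }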


---


[GitHub] carbondata pull request #2052: [CARBONDATA-2246][DataLoad] Fix exhausted mem...

2018-03-13 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2052#discussion_r174195929
  
--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/sort/unsafe/UnsafeSortDataRows.java ---
@@ -218,50 +213,45 @@ public void addRow(Object[] row) throws CarbonSortKeyAndGroupByException {
       rowPage.addRow(row, rowBuffer.get());
     } else {
       try {
-        if (enableInMemoryIntermediateMerge) {
-          unsafeInMemoryIntermediateFileMerger.startInmemoryMergingIfPossible();
-        }
-        unsafeInMemoryIntermediateFileMerger.startFileMergingIfPossible();
-        semaphore.acquire();
-        dataSorterAndWriterExecutorService.submit(new DataSorterAndWriter(rowPage));
+        handlePreviousPage();
         rowPage = createUnsafeRowPage();
         rowPage.addRow(row, rowBuffer.get());
       } catch (Exception e) {
         LOGGER.error(
             "exception occurred while trying to acquire a semaphore lock: " + e.getMessage());
         throw new CarbonSortKeyAndGroupByException(e);
       }
-
     }
   }
 
   /**
-   * Below method will be used to start storing process This method will get
-   * all the temp files present in sort temp folder then it will create the
-   * record holder heap and then it will read first record from each file and
-   * initialize the heap
+   * Below method will be used to start sorting process. This method will get
+   * all the temp unsafe pages in memory and all the temp files and try to merge them if possible.
+   * Also, it will spill the pages to disk or add it to unsafe sort memory.
    *
-   * @throws InterruptedException
+   * @throws CarbonSortKeyAndGroupByException if error occurs during in-memory merge
+   * @throws InterruptedException if error occurs during data sort and write
    */
-  public void startSorting() throws InterruptedException {
+  public void startSorting() throws CarbonSortKeyAndGroupByException, InterruptedException {
     LOGGER.info("Unsafe based sorting will be used");
     if (this.rowPage.getUsedSize() > 0) {
-      TimSort<UnsafeCarbonRow, IntPointerBuffer> timSort = new TimSort<>(
-          new UnsafeIntSortDataFormat(rowPage));
-      if (parameters.getNumberOfNoDictSortColumns() > 0) {
-        timSort.sort(rowPage.getBuffer(), 0, rowPage.getBuffer().getActualSize(),
-            new UnsafeRowComparator(rowPage));
-      } else {
-        timSort.sort(rowPage.getBuffer(), 0, rowPage.getBuffer().getActualSize(),
-            new UnsafeRowComparatorForNormalDims(rowPage));
-      }
-      unsafeInMemoryIntermediateFileMerger.addDataChunkToMerge(rowPage);
+      handlePreviousPage();
     } else {
       rowPage.freeMemory();
     }
     startFileBasedMerge();
   }
 
+  private void handlePreviousPage()
--- End diff --

can you provide comment for this function


---


[GitHub] carbondata pull request #2052: [CARBONDATA-2246][DataLoad] Fix exhausted mem...

2018-03-12 Thread xuchuanyin
GitHub user xuchuanyin opened a pull request:

https://github.com/apache/carbondata/pull/2052

[CARBONDATA-2246][DataLoad] Fix exhausted memory problem during unsafe data loading

If the size of an unsafe row page equals that of the working memory, the
last page will exhaust the working memory and CarbonData will hit a 'not
enough memory' problem when it converts the data to columnar format.

All unsafe pages should be spilled to disk or moved to unsafe sort memory
instead of being kept in the unsafe working memory.
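
To make the failure mode concrete, below is a toy, self-contained sketch (not CarbonData code; every name in it is invented for illustration) of a fixed working-memory budget: when the last row page stays in working memory, the columnar-conversion step has nothing left to allocate from.

    // Toy model: a fixed budget shared by the last row page and the
    // columnar conversion step that follows it.
    public class ToyWorkingMemory {
      private final long budget;
      private long used;

      ToyWorkingMemory(long budget) { this.budget = budget; }

      boolean tryAllocate(long bytes) {
        if (used + bytes > budget) {
          return false; // the 'not enough memory' case
        }
        used += bytes;
        return true;
      }

      void free(long bytes) { used -= bytes; }

      public static void main(String[] args) {
        long pageSize = 64L << 20; // a 64 MB row page, same size as the budget
        ToyWorkingMemory wm = new ToyWorkingMemory(pageSize);

        wm.tryAllocate(pageSize); // the last page fills the whole budget
        System.out.println(wm.tryAllocate(1L << 20)); // false: conversion cannot allocate

        // The fix: spill the page to disk or move it to sort memory first,
        // so working memory is free before columnar conversion starts.
        wm.free(pageSize);
        System.out.println(wm.tryAllocate(1L << 20)); // true
      }
    }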

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

 - [x] Any interfaces changed?
   `NO`
 - [x] Any backward compatibility impacted?
   `NO`
 - [x] Document update required?
   `NO`
 - [x] Testing done
   Please provide details on
   - Whether new unit test cases have been added or why no new tests are required?
     `Tests added`
   - How it is tested? Please attach test report.
     `Tested in local machine`
   - Is it a performance related change? Please attach the performance test report.
     `NO`
   - Any additional information to help reviewers in testing this change.

 - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xuchuanyin/carbondata 0312_bug_unsafe_memory

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2052.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2052


commit da3329ca60250e07b568f905ba5cdaa115f8d522
Author: xuchuanyin 
Date:   2018-03-12T12:25:17Z

Fix exhausted memory problem during unsafe data loading

If the size of an unsafe row page equals that of the working memory, the
last page will exhaust the working memory and CarbonData will hit a 'not
enough memory' problem when it converts the data to columnar format.

All unsafe pages should be spilled to disk or moved to unsafe sort memory
instead of being kept in the unsafe working memory.




---