Github user ajantha-bhat commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2664#discussion_r213593256
--- Diff:
processing/src/main/java/org/apache/carbondata/processing/loading/sort/impl/UnsafeBatchParallelReadMergeSorterImpl.java
---
@@ -62,12 +62,15 @@
private AtomicLong rowCounter;
+ private AtomicInteger batchId;
+
--- End diff ---
This issue scenario happens only in the spill-to-disk case.
Spill to disk happens only when the batch size is more than
CCC.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB.
If I set this property to a lesser number, it will be overwritten by
CarbonProperties.validateSortMemorySizeInMB().
So the only way to reproduce the issue is with huge data, but adding a testcase
for huge data will slow down the PR builder.
Hence not added.
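For readers unfamiliar with the validation behavior mentioned above, here is a minimal, hypothetical sketch of that kind of clamping: a configured sort-memory size below the allowed minimum is silently overwritten with a default, so a small configured value never takes effect. The class name, constants, and thresholds below are illustrative assumptions, not CarbonData's actual code.

```java
// Hypothetical illustration of a validate-and-overwrite property check.
// All names and values are assumptions for the sketch only.
public class SortMemoryValidator {

    // Assumed minimum and default, for illustration only.
    static final int MIN_SORT_MEMORY_MB = 512;
    static final int DEFAULT_SORT_MEMORY_MB = 1024;

    // Mimics the behavior described above: a value below the minimum
    // is overwritten with the default instead of being honored.
    static int validateSortMemorySizeInMB(int configuredMb) {
        if (configuredMb < MIN_SORT_MEMORY_MB) {
            return DEFAULT_SORT_MEMORY_MB;
        }
        return configuredMb;
    }

    public static void main(String[] args) {
        // A small configured value is overwritten, so spill to disk
        // cannot be forced simply by lowering the property.
        System.out.println(validateSortMemorySizeInMB(64));
        // A value at or above the minimum is kept as configured.
        System.out.println(validateSortMemorySizeInMB(2048));
    }
}
```

This is why, under this kind of validation, the spill path can only be exercised with data large enough to exceed the enforced memory size.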
---