ravipesala commented on a change in pull request #3102: [CARBONDATA-3272]fix 
ArrayIndexOutOfBoundsException of horizontal compaction during update, when 
cardinality changes within a segment
URL: https://github.com/apache/carbondata/pull/3102#discussion_r251421974
 
 

 ##########
 File path: processing/src/main/java/org/apache/carbondata/processing/merger/CarbonCompactionExecutor.java
 ##########
 @@ -139,26 +138,69 @@ public CarbonCompactionExecutor(Map<String, TaskBlockInfo> segmentMapping,
           CarbonCompactionUtil.isRestructured(listMetadata, carbonTable.getTableLastUpdatedTime())
               || !CarbonCompactionUtil.isSorted(listMetadata.get(0));
       for (String task : taskBlockListMapping) {
-        list = taskBlockInfo.getTableBlockInfoList(task);
-        Collections.sort(list);
-        LOGGER.info(
-            "for task -" + task + "- in segment id -" + segmentId + "- block size is -" + list
-                .size());
-        queryModel.setTableBlockInfos(list);
-        if (sortingRequired) {
-          resultList.get(CarbonCompactionUtil.UNSORTED_IDX).add(
-              new RawResultIterator(executeBlockList(list, segmentId, task, configuration),
-                  sourceSegProperties, destinationSegProperties, false));
-        } else {
-          resultList.get(CarbonCompactionUtil.SORTED_IDX).add(
-              new RawResultIterator(executeBlockList(list, segmentId, task, configuration),
-                  sourceSegProperties, destinationSegProperties, false));
+        tableBlockInfos = taskBlockInfo.getTableBlockInfoList(task);
+        // During an update the cardinality may change within the segment, which can cause a
+        // failure while converting the row. So collect all the blocks present in a task, split
+        // them into lists of the same key length, and create a separate RawResultIterator for
+        // each list. If all the blocks have the same key length, a single RawResultIterator
+        // serves all of them.
+        List<List<TableBlockInfo>> listOfTableBlocksBasedOnKeyLength =
+            getListOfTableBlocksBasedOnKeyLength(tableBlockInfos);
+        for (List<TableBlockInfo> tableBlockInfoList : listOfTableBlocksBasedOnKeyLength) {
+          Collections.sort(tableBlockInfoList);
+          LOGGER.info("for task -" + task + "- in segment id -" + segmentId + "- block size is -"
+              + tableBlockInfoList.size());
+          queryModel.setTableBlockInfos(tableBlockInfoList);
+          if (sortingRequired) {
+            resultList.get(CarbonCompactionUtil.UNSORTED_IDX).add(
+                getRawResultIterator(configuration, segmentId, task, tableBlockInfoList));
+          } else {
+            resultList.get(CarbonCompactionUtil.SORTED_IDX).add(
+                getRawResultIterator(configuration, segmentId, task, tableBlockInfoList));
+          }
         }
       }
     }
     return resultList;
   }
 
+  private RawResultIterator getRawResultIterator(Configuration configuration, String segmentId,
+      String task, List<TableBlockInfo> tableBlockInfoList)
+      throws QueryExecutionException, IOException {
+    return new RawResultIterator(
+        executeBlockList(tableBlockInfoList, segmentId, task, configuration),
+        getSourceSegmentProperties(
+            Collections.singletonList(tableBlockInfoList.get(0).getDataFileFooter())),
+        destinationSegProperties, false);
+  }
+
+  /**
+   * Returns a list of TableBlockInfo lists, where each inner list holds blocks of the
+   * same key size.
+   * @param tableBlockInfos list of TableBlockInfos present in each task
+   */
+  private List<List<TableBlockInfo>> getListOfTableBlocksBasedOnKeyLength(
+      List<TableBlockInfo> tableBlockInfos) {
+    List<List<TableBlockInfo>> listOfTableBlockInfoListOnKeySize = new ArrayList<>();
+    Map<Integer, List<TableBlockInfo>> keySizeToTableBlockInfoMap = new HashMap<>();
+    for (TableBlockInfo tableBlock : tableBlockInfos) {
+      // get the keySizeInBytes from the data file footer
+      int keySizeInBytes =
+          getSourceSegmentProperties(Collections.singletonList(tableBlock.getDataFileFooter()))
+              .getDimensionKeyGenerator().getKeySizeInBytes();
+      List<TableBlockInfo> tempBlockInfoList = keySizeToTableBlockInfoMap.get(keySizeInBytes);
 
 Review comment:
   I don't think checking by keySizeInBytes is the right approach: it is the total size of all columns, so blocks with different per-column layouts can still have the same total and be grouped together. It might go wrong in many scenarios. Please consider the individual key sizes as the key instead.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
