charlesconnell commented on code in PR #7129:
URL: https://github.com/apache/hbase/pull/7129#discussion_r2175489930


##########
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java:
##########
@@ -800,29 +857,63 @@ public List<InputSplit> getSplits(JobContext context) throws IOException, Interr
         conf.setInt(MR_NUM_MAPS, mappers);
       }
 
-      List<List<Pair<SnapshotFileInfo, Long>>> groups = getBalancedSplits(snapshotFiles, mappers);
-      List<InputSplit> splits = new ArrayList(groups.size());
-      for (List<Pair<SnapshotFileInfo, Long>> files : groups) {
-        splits.add(new ExportSnapshotInputSplit(files));
+      Class<? extends CustomFileGrouper> inputFileGrouperClass = conf.getClass(
+        CONF_INPUT_FILE_GROUPER_CLASS, NoopCustomFileGrouper.class, CustomFileGrouper.class);
+      CustomFileGrouper customFileGrouper =
+        ReflectionUtils.newInstance(inputFileGrouperClass, conf);
+      Collection<Collection<Pair<SnapshotFileInfo, Long>>> groups =
+        customFileGrouper.getGroupedInputFiles(snapshotFiles);
+
+      LOG.info("CustomFileGrouper {} split input files into {} groups", inputFileGrouperClass,
+        groups.size());
+      int mappersPerGroup = groups.isEmpty() ? 1 : Math.max(mappers / groups.size(), 1);

Review Comment:
   The check is needed to handle ExportSnapshot runs with no files. This is a niche use-case, but it worked before this PR, so I'd like to keep it working in the same manner (launching a job with no mappers). I don't love launching a job with no mappers, but I thought changing that was out of scope for this PR.
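   To make the rationale concrete, here is a minimal standalone sketch of the guard in the diff above (the class name `MappersPerGroupSketch` is hypothetical; the expression itself mirrors the `mappersPerGroup` line): with no input files, `groups` is empty and `mappers / groups.size()` would throw `ArithmeticException`, so the `isEmpty()` check preserves the pre-PR behavior.

   ```java
   import java.util.Collection;
   import java.util.Collections;
   import java.util.List;

   public class MappersPerGroupSketch {
     // Mirrors the guard from the diff: an empty snapshot produces zero groups,
     // and dividing by groups.size() would throw ArithmeticException. Falling
     // back to 1 keeps the old "job with no mappers" behavior working.
     static int mappersPerGroup(int mappers, Collection<?> groups) {
       return groups.isEmpty() ? 1 : Math.max(mappers / groups.size(), 1);
     }

     public static void main(String[] args) {
       // No input files: falls back to 1 instead of dividing by zero.
       System.out.println(mappersPerGroup(10, Collections.emptyList())); // 1
       // More groups than mappers: each group still gets at least one mapper.
       System.out.println(mappersPerGroup(2, List.of("a", "b", "c")));   // 1
       // Typical case: mappers spread evenly across groups.
       System.out.println(mappersPerGroup(10, List.of("a", "b")));       // 5
     }
   }
   ```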



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
