danny0405 commented on code in PR #11914:
URL: https://github.com/apache/hudi/pull/11914#discussion_r1769517298


##########
hudi-hadoop-mr/src/main/java/org/apache/hudi/hadoop/hive/HoodieCombineHiveInputFormat.java:
##########
@@ -375,6 +388,38 @@ public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException {
     // clear work from ThreadLocal after splits generated in case of thread is reused in pool.
     Utilities.clearWorkMapForConf(job);
 
+    // build internal schema for the query
+    if (result.size() > 0) {
+      ArrayList<String> uniqTablePaths = new ArrayList<>();
+      Arrays.stream(paths).forEach(path -> {
+        HoodieStorage storage = null;
+        try {
+          storage = new HoodieHadoopStorage(path.getFileSystem(job));
+          Option<StoragePath> tablePath = TablePathUtils.getTablePath(storage, HadoopFSUtils.convertToStoragePath(path));
+          if (tablePath.isPresent()) {
+            uniqTablePaths.add(tablePath.get().toUri().toString());
+          }
+        } catch (IOException e) {
+          throw new RuntimeException(e);
+        }
+      });
+
+      try {
+        for (String path : uniqTablePaths) {
+          HoodieTableMetaClient metaClient = 
HoodieTableMetaClient.builder().setBasePath(path).setConf(new 
HadoopStorageConfiguration(job)).build();
+          TableSchemaResolver schemaUtil = new TableSchemaResolver(metaClient);
+          Option<InternalSchema> schema = 
schemaUtil.getTableInternalSchemaFromCommitMetadata();
+          if (schema.isPresent()) {
+            LOG.info("Set internal schema and avro schema of path: " + 
path.toString());
+            job.set(INTERNAL_SCHEMA_CACHE_KEY_PREFIX + "." + path, 
SerDeHelper.toJson(schema.get()));
+            job.set(SCHEMA_CACHE_KEY_PREFIX + "." + path, 
schemaUtil.getTableAvroSchema().toString());

Review Comment:
   Do we cache the schema by path or by table name? The table name looks more straightforward. Also, `schemaUtil.getTableAvroSchema()` should be invoked first, so that the commit metadata cache in `TableSchemaResolver` is populated and redundant commit metadata deserialization is avoided.
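   A minimal sketch of the suggested reordering (reusing only the identifiers from the diff above; untested, for illustration only):
   ```java
   // Sketch only: resolve the Avro schema first so TableSchemaResolver caches
   // the commit metadata, then reuse that cache for the internal schema lookup.
   String avroSchema = schemaUtil.getTableAvroSchema().toString();
   Option<InternalSchema> schema = schemaUtil.getTableInternalSchemaFromCommitMetadata();
   if (schema.isPresent()) {
     job.set(SCHEMA_CACHE_KEY_PREFIX + "." + path, avroSchema);
     job.set(INTERNAL_SCHEMA_CACHE_KEY_PREFIX + "." + path, SerDeHelper.toJson(schema.get()));
   }
   ```
   This way a single pass over the commit metadata serves both cache entries.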



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
