vinothchandar commented on a change in pull request #674: Upgrade to Hive 2.x, MOR read query fixes and performance improvement
URL: https://github.com/apache/incubator-hudi/pull/674#discussion_r291869300
 
 

 ##########
 File path: hoodie-hadoop-mr/src/main/java/com/uber/hoodie/hadoop/realtime/AbstractRealtimeRecordReader.java
 ##########
 @@ -323,31 +325,40 @@ private static Schema addPartitionFields(Schema schema, List<String> partitionin
    * the base split was written.
    */
   private void init() throws IOException {
-    writerSchema = new AvroSchemaConverter().convert(baseFileSchema);
-    List<String> fieldNames = writerSchema.getFields().stream().map(Field::name).collect(Collectors.toList());
-    if (split.getDeltaFilePaths().size() > 0) {
-      String logPath = split.getDeltaFilePaths().get(split.getDeltaFilePaths().size() - 1);
-      FileSystem fs = FSUtils.getFs(logPath, jobConf);
-      writerSchema = readSchemaFromLogFile(fs, new Path(logPath));
-      fieldNames = writerSchema.getFields().stream().map(Field::name).collect(Collectors.toList());
+    Schema schemaFromLogFile = null;
+    HoodieTableMetaClient metaClient = new HoodieTableMetaClient(jobConf, split.getBasePath());
+    // Sort the log file paths in reverse order of commitTime & version for extra safety
+    List<String> deltaPaths = split.getDeltaFilePaths().stream().map(s -> new HoodieLogFile(new Path(s)))
 
 Review comment:
   Is there a higher-level LogFormat or LogFormatReader API that lets us do this without having to deal with each delta file by hand?
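   To illustrate the "reverse order of commitTime & version" sort the diff adds, here is a minimal, self-contained sketch. It does not use the Hudi API: `LogFileSorter` and its parsing helpers are hypothetical, and it assumes a delta log file naming scheme of the form `.fileId_commitTime.log.version`; the real code would rely on `HoodieLogFile` for this parsing.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class LogFileSorter {

  // Hypothetical helper: extract the commit time from a path, assuming a
  // naming scheme like "/dir/.fileId_commitTime.log.version".
  static String commitTime(String path) {
    String name = path.substring(path.lastIndexOf('/') + 1);
    return name.substring(name.indexOf('_') + 1, name.indexOf(".log"));
  }

  // Hypothetical helper: extract the log file version (the trailing integer).
  static int version(String path) {
    return Integer.parseInt(path.substring(path.lastIndexOf('.') + 1));
  }

  // Sort delta log paths so the latest (commitTime, version) comes first,
  // mirroring the "reverse order ... for extra safety" comment in the diff.
  public static List<String> sortLatestFirst(List<String> paths) {
    return paths.stream()
        .sorted(Comparator.comparing(LogFileSorter::commitTime)
            .thenComparing(LogFileSorter::version)
            .reversed())
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String> paths = Arrays.asList(
        "/tmp/.f1_20190601.log.1",
        "/tmp/.f1_20190530.log.2",
        "/tmp/.f1_20190601.log.2");
    // Latest commit time wins; within a commit, the higher version wins.
    System.out.println(sortLatestFirst(paths));
  }
}
```

   The comparator sorts ascending by commit time, then version, and reverses the whole ordering, so the most recent log file lands at index 0.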

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
