jonvex commented on code in PR #10137:
URL: https://github.com/apache/hudi/pull/10137#discussion_r1408353113


##########
hudi-client/hudi-spark-client/src/main/scala/org/apache/hudi/SparkFileFormatInternalRowReaderContext.scala:
##########
@@ -77,14 +78,18 @@ class SparkFileFormatInternalRowReaderContext(baseFileReader: Option[Partitioned
           }
         }).asInstanceOf[ClosableIterator[InternalRow]]
     } else {
-      if (baseFileReader.isEmpty) {
-        throw new IllegalArgumentException("Base file reader is missing when instantiating "
-          + "SparkFileFormatInternalRowReaderContext.");
+      val key = generateKey(dataSchema, requiredSchema)
+      if (!readerMaps.contains(key)) {
+        throw new IllegalStateException("schemas don't hash to a known reader")
       }
-      new CloseableInternalRowIterator(baseFileReader.get.apply(fileInfo))
+      new CloseableInternalRowIterator(readerMaps(key).apply(fileInfo))
     }
   }
 
+  private def generateKey(dataSchema: Schema, requestedSchema: Schema): Long = {

Review Comment:
   The only time it would matter is if there are pushdown filters. This is just a temporary, hacky solution for now. Eventually we will build the reader inside of getFileRecordIterator, and we will need to incorporate an abstraction for filters when we do that.
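
   For reference, a minimal sketch of the schema-keyed lookup the diff introduces. The names `generateKey` and `readerMaps` come from the diff; the concrete key scheme (combining the two schemas' hashes into one `Long`) and the `Reader` type are assumptions for illustration, not the actual Hudi implementation:

   ```scala
   import scala.collection.mutable

   object ReaderKeySketch {
     // Stand-ins for org.apache.avro.Schema and the partitioned-file reader;
     // both are hypothetical simplifications for this sketch.
     type Schema = String
     type Reader = String => Iterator[String]

     private val readerMaps = mutable.Map.empty[Long, Reader]

     // Assumed key scheme: pack both schemas' hash codes into one Long,
     // mirroring generateKey(dataSchema, requiredSchema) from the diff.
     def generateKey(dataSchema: Schema, requiredSchema: Schema): Long =
       (dataSchema.hashCode.toLong << 32) | (requiredSchema.hashCode.toLong & 0xffffffffL)

     def register(dataSchema: Schema, requiredSchema: Schema, reader: Reader): Unit =
       readerMaps(generateKey(dataSchema, requiredSchema)) = reader

     // Mirrors the guarded lookup in the diff: an unregistered schema pair
     // fails fast instead of silently building a mismatched reader.
     def lookup(dataSchema: Schema, requiredSchema: Schema): Reader = {
       val key = generateKey(dataSchema, requiredSchema)
       if (!readerMaps.contains(key)) {
         throw new IllegalStateException("schemas don't hash to a known reader")
       }
       readerMaps(key)
     }
   }
   ```

   The fail-fast `IllegalStateException` matches the diff's intent: until the reader is built inside `getFileRecordIterator`, a lookup miss indicates a wiring bug rather than a recoverable condition.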


