rdblue commented on a change in pull request #2170:
URL: https://github.com/apache/iceberg/pull/2170#discussion_r566260803



##########
File path: flink/src/main/java/org/apache/iceberg/flink/source/DataIterator.java
##########
@@ -71,12 +72,13 @@
 
   InputFile getInputFile(FileScanTask task) {
     Preconditions.checkArgument(!task.isDataTask(), "Invalid task type");
-
-    return inputFiles.get(task.file().path().toString());
+    return getInputFile(task.file().path().toString());
   }
 
   InputFile getInputFile(String location) {
-    return inputFiles.get(location);
+    // normalize the path before looking it up in the map
+    Path path = new Path(location);

Review comment:
       I don't think using the Hadoop API directly is a good way to solve the 
problem. It sounds like we need to fix the keys in the map to match the 
original location from the input split instead.
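
For illustration only, here is a rough sketch of what keying the map by the original task location could look like. The `InputFileIndex` class and the `open` function are hypothetical names made up for this example, not the actual DataIterator code; the point is that the map is built from the same `task.file().path().toString()` string that is later used for lookup, so no Hadoop `Path` normalization is needed at read time:

```java
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

import org.apache.iceberg.CombinedScanTask;
import org.apache.iceberg.FileScanTask;
import org.apache.iceberg.io.InputFile;

// Hypothetical helper, shown only to illustrate the keying suggestion above.
class InputFileIndex {
  private final Map<String, InputFile> inputFiles;

  InputFileIndex(CombinedScanTask combinedTask, Function<FileScanTask, InputFile> open) {
    // Key the map by the exact location string carried on each FileScanTask, so a
    // later lookup with task.file().path().toString() hits the same key directly.
    this.inputFiles = combinedTask.files().stream()
        .collect(Collectors.toMap(
            fileTask -> fileTask.file().path().toString(),
            open,
            (left, right) -> left));  // tasks sharing a file map to the same InputFile
  }

  InputFile get(FileScanTask task) {
    return inputFiles.get(task.file().path().toString());
  }
}
```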



