a2l007 commented on a change in pull request #10830:
URL: https://github.com/apache/druid/pull/10830#discussion_r571154198
##########
 File path: extensions-core/hdfs-storage/src/main/java/org/apache/druid/inputsource/hdfs/HdfsInputSource.java
##########
@@ -101,20 +102,25 @@ public HdfsInputSource(
     return paths;
   }
 
-  public static Collection<Path> getPaths(List<String> inputPaths, Configuration configuration) throws IOException
+  public static Collection<Path> getPaths(List<String> inputPathStrings, Configuration configuration) throws IOException
   {
-    if (inputPaths.isEmpty()) {
+    if (inputPathStrings.isEmpty()) {
       return Collections.emptySet();
     }
 
     // Use FileInputFormat to read splits. To do this, we need to make a fake Job.
     Job job = Job.getInstance(configuration);
 
     // Add paths to the fake JobContext.
-    for (String inputPath : inputPaths) {
+    for (String inputPath : inputPathStrings) {
       FileInputFormat.addInputPaths(job, inputPath);
    }
+
+    final Path[] inputPaths = FileInputFormat.getInputPaths(job);
+    if (Arrays.stream(inputPaths).anyMatch(path -> !"hdfs".equalsIgnoreCase(path.toUri().getScheme()))) {
+      throw new IllegalArgumentException("Input paths must be the HDFS path");
+    }
+
Review comment:
@jihoonson Technically, `WebHdfsFileSystem` did work with this input source before, so this change could break ingestion pipelines for operators relying on the HDFS input source with the `webhdfs` scheme. Could you please comment on the motivation behind restricting it to only `hdfs`?
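For illustration only, here is a sketch of a less restrictive version of the check that validates against an allow-list of schemes instead of hard-coding `hdfs`. The `SUPPORTED_SCHEMES` constant and its contents are hypothetical, not part of this PR; the snippet assumes Guava's `ImmutableSet` and `java.util.Locale` are available, and otherwise uses the same Hadoop `FileInputFormat` and `Path` calls as the diff above:

```java
// Hypothetical allow-list; "webhdfs"/"swebhdfs" are included only to illustrate
// keeping WebHdfsFileSystem-based pipelines working. Not part of this PR.
private static final Set<String> SUPPORTED_SCHEMES = ImmutableSet.of("hdfs", "webhdfs", "swebhdfs");

final Path[] inputPaths = FileInputFormat.getInputPaths(job);
for (Path path : inputPaths) {
  final String scheme = path.toUri().getScheme();
  // A null scheme means the path resolves against the default filesystem from the
  // Configuration, so the restriction only applies when a scheme is given explicitly.
  if (scheme != null && !SUPPORTED_SCHEMES.contains(scheme.toLowerCase(Locale.ENGLISH))) {
    throw new IllegalArgumentException(
        String.format("Input path [%s] has unsupported scheme [%s]; supported schemes are %s",
                      path, scheme, SUPPORTED_SCHEMES)
    );
  }
}
```

This would also make the failure message actionable by naming the offending path and scheme, rather than rejecting all non-`hdfs` URIs with a generic error.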