suneet-s commented on a change in pull request #10830:
URL: https://github.com/apache/druid/pull/10830#discussion_r569543791
##########
File path: extensions-core/hdfs-storage/src/main/java/org/apache/druid/inputsource/hdfs/HdfsInputSource.java
##########
@@ -101,20 +102,25 @@ public HdfsInputSource(
return paths;
}
- public static Collection<Path> getPaths(List<String> inputPaths, Configuration configuration) throws IOException
+ public static Collection<Path> getPaths(List<String> inputPathStrings, Configuration configuration) throws IOException
{
- if (inputPaths.isEmpty()) {
+ if (inputPathStrings.isEmpty()) {
return Collections.emptySet();
}
// Use FileInputFormat to read splits. To do this, we need to make a fake Job.
Job job = Job.getInstance(configuration);
// Add paths to the fake JobContext.
- for (String inputPath : inputPaths) {
+ for (String inputPath : inputPathStrings) {
FileInputFormat.addInputPaths(job, inputPath);
}
+ final Path[] inputPaths = FileInputFormat.getInputPaths(job);
+ if (Arrays.stream(inputPaths).anyMatch(path -> !"hdfs".equalsIgnoreCase(path.toUri().getScheme()))) {
+ throw new IllegalArgumentException("Input paths must be the HDFS path");
+ }
+
Review comment:
Are there users who rely on the current behavior? Would restricting this to
just the hdfs scheme make it impossible for some users to ingest files from
the other filesystems Hadoop's FileSystem layer can read (for example local
files or S3) anymore? If that's the case, should we introduce a config that
lets server admins specify which schemes are supported?