jihoonson commented on a change in pull request #10830:
URL: https://github.com/apache/druid/pull/10830#discussion_r574986869
##########
File path: docs/ingestion/native-batch.md
##########
@@ -1553,6 +1557,11 @@ Note that prefetching or caching isn't that useful in the Parallel task.
|fetchTimeout|Timeout for fetching each file.|60000|
|maxFetchRetry|Maximum number of retries for fetching each file.|3|
+You can also ingest from other storage using the HDFS firehose if the HDFS client supports that storage.
+However, if you want to ingest from cloud storage, consider using the proper input sources for them.
Review comment:
Same here.
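For context, a minimal sketch of how the fields from the table above could appear in an `hdfs` firehose entry. This is an illustration only, not wording from the PR; the host and path are hypothetical, and the values shown are the documented defaults:

```json
{
  "type": "hdfs",
  "paths": "hdfs://namenode:8020/example/data/*.json",
  "fetchTimeout": 60000,
  "maxFetchRetry": 3
}
```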
##########
File path: docs/ingestion/native-batch.md
##########
@@ -1127,9 +1127,10 @@ Sample specs:
|type|This should be `hdfs`.|None|yes|
|paths|HDFS paths. Can be either a JSON array or comma-separated string of paths. Wildcards like `*` are supported in these paths. Empty files located under one of the given paths will be skipped.|None|yes|
-You can also ingest from cloud storage using the HDFS input source.
-However, if you want to read from AWS S3 or Google Cloud Storage, consider using
-the [S3 input source](#s3-input-source) or the [Google Cloud Storage input source](#google-cloud-storage-input-source) instead.
+You can also ingest from other storage using the HDFS input source if the HDFS client supports that storage.
+However, if you want to ingest from cloud storage, consider using the proper input sources for them.
Review comment:
You can read not only from cloud storage but from any storage supported by the HDFS client. I changed this to `the service-specific input source for your data storage`.
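To make the distinction concrete, here is a hedged sketch of the two options being discussed, using only the `type` and `paths` fields from the table above. The paths are hypothetical, and real specs would nest these under an `ioConfig`:

```json
{
  "inputSource": {
    "type": "hdfs",
    "paths": "hdfs://namenode:8020/example/data/*.json"
  }
}
```

versus the service-specific input source for the same data stored in S3:

```json
{
  "inputSource": {
    "type": "s3",
    "uris": ["s3://example-bucket/data/file1.json"]
  }
}
```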
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]