SemyonSinchenko commented on issue #570:
URL: https://github.com/apache/incubator-graphar/issues/570#issuecomment-2260390498

   > So I think we can add a guide to the documentation telling users that if 
they want to use S3, they need to add `hadoop-aws` themselves?
   
   What do you think about referring to the documentation of Apache Spark 
itself instead? Otherwise it may be confusing: most Spark distributions 
(like Databricks Runtime, Microsoft Fabric, EMR, Cloudera Spark, etc.) already 
contain all the dependencies needed for integration with the corresponding 
cloud provider. These dependencies are often proprietary, and adding the OSS 
`hadoop-aws` on top of them may lead to unpredictable behaviour.
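   
   For context, here is a minimal sketch of what "adding `hadoop-aws` 
themselves" means on vanilla OSS Spark. The Maven coordinates are the official 
Hadoop ones, but the version (3.3.4 here) is an assumption: it has to match 
the Hadoop version the Spark build ships with. The bucket path is hypothetical.
   
   ```scala
   import org.apache.spark.sql.SparkSession
   
   // Pull the OSS S3A connector onto the classpath at session startup.
   // On managed runtimes (Databricks, EMR, Fabric, ...) this is usually
   // unnecessary and may clash with the vendor's proprietary connector.
   val spark = SparkSession.builder()
     .appName("graphar-s3-sketch")
     .master("local[*]")
     // The hadoop-aws version must match the bundled Hadoop version.
     .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.3.4")
     .getOrCreate()
   
   // With the connector available, s3a:// paths resolve normally; credentials
   // come from the usual S3A provider chain (env vars, instance roles, ...).
   val vertices = spark.read.parquet("s3a://my-bucket/graphar/vertices/")
   ```
   
   On a real cluster the same coordinates are more commonly passed at submit 
time, e.g. `spark-submit --packages org.apache.hadoop:hadoop-aws:3.3.4`.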



