Hi Martijn,
I agree with your opinion; one must consider carefully whether it's a good
tradeoff.
In my view, adding an extra load directory would be worth it because it's a
relatively small change/risk (plus the structural solution is going in this
direction), but enforcing the HDFS lib is not such a good bandaid.
No matter
Hi Gabor,
They are absolutely not coupled; I'm merely mentioning it because it could
potentially "waste" someone's time: first by creating a bandaid which would
then need to be replaced with a more permanent solution. That could be fine,
but it all depends on how much effort the bandaid takes and how error-prone it is
Hi Martijn,
> I'm not sure that creating another bandaid is a good idea.
I think we have multiple issues here which are not coupled:
* Hadoop config handling in Flink is not consistent (for example, the runtime
uses Hadoop FS connector code, and each connector has its own
implementation)
* S3 connecto
Hi all,
I have been thinking that we should consider creating one new, rock solid
S3 connector for Flink. I think it's confusing for users that there is an
S3 Presto and an S3 Hadoop implementation, neither of which is perfect. I'm
not sure that creating another bandaid is a good idea.
I'm not su
Thanks for the answer, Gabor!
Just for the sake of clarity:
- The issue is that the `flink-s3-fs-hadoop` does not even read the
`core-site.xml` if it is not on the classpath
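For context, a minimal `core-site.xml` of the kind that gets silently ignored when it is off the classpath might look like this (the endpoint and path-style values are purely illustrative, not a recommendation):

```xml
<?xml version="1.0"?>
<!-- Illustrative only: S3A settings a Hadoop-based connector would normally
     pick up from a core-site.xml found on the classpath. -->
<configuration>
  <property>
    <name>fs.s3a.endpoint</name>
    <value>s3.eu-west-1.amazonaws.com</value>
  </property>
  <property>
    <name>fs.s3a.path.style.access</name>
    <value>true</value>
  </property>
</configuration>
```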
Do I understand correctly that the proposal is:
- Write a new `getHadoopConfiguration` method somewhere without using the
de
Hi Peter,
> would this cause issues for the users?
I think yes; it is going to cause trouble for users who want to use S3
without the HDFS client.
Adding the HDFS client may happen, but enforcing it is not a good direction.
As mentioned, I've realized that we have 6 different ways in which the Hadoop
conf is loaded
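The classpath dependence discussed earlier in the thread can be demonstrated without Hadoop at all: Hadoop's `Configuration` resolves `core-site.xml` as a classpath resource, so a plain resource lookup shows why the file is silently skipped when its directory is not on the classpath. A minimal sketch (the class name is mine, not Flink's):

```java
// Sketch of the lookup Hadoop's Configuration effectively performs for its
// default resources: core-site.xml is searched as a classpath resource, so
// when the Hadoop conf dir is not on the classpath the file is never read
// and built-in defaults are used silently.
public class CoreSiteLookup {

    /** Returns the classpath URL of core-site.xml, or null when it is absent. */
    static java.net.URL locate() {
        return Thread.currentThread().getContextClassLoader()
                .getResource("core-site.xml");
    }

    public static void main(String[] args) {
        java.net.URL url = locate();
        System.out.println(url == null
                ? "core-site.xml not on classpath; defaults would be used silently"
                : "core-site.xml found at " + url);
    }
}
```

Running this in a JVM whose classpath lacks the Hadoop conf directory prints the "not on classpath" branch, which is exactly the failure mode reported for `flink-s3-fs-hadoop` above.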