dmvk commented on a change in pull request #17958:
URL: https://github.com/apache/flink/pull/17958#discussion_r784823504
##########
File path:
flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopFsFactory.java
##########
@@ -75,6 +75,12 @@ public FileSystem create(URI fsUri) throws IOException {
// from here on, we need to handle errors due to missing optional
// dependency classes
try {
+ // -- (0) set hadoop caller context
+
+ if (getCurrent() != null && flinkConfig != null) {
+ HadoopUtils.setCallerContext(getCurrent(), flinkConfig);
Review comment:
It's not that simple. There is a JVM-wide cache for FileSystem instances, so if
multiple tasks access the same filesystem, it will be initialized by only one
of them.
As I've suggested before, a solution that would (hopefully) cover all corner
cases would involve re-setting the caller context in each method that can talk
to Hadoop (create, open, list, ...).
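
To illustrate the pitfall: Hadoop's `CallerContext` is stored in a thread-local, while the `FileSystem` instance is cached JVM-wide, so a context set once at creation time is only visible to the creating thread. Below is a minimal, self-contained sketch of the suggested fix, with a plain `ThreadLocal` standing in for `org.apache.hadoop.ipc.CallerContext` (to avoid the Hadoop dependency); the class and method names are illustrative, not Flink's actual implementation.

```java
public class CallerContextSketch {

    // Stand-in for CallerContext.setCurrent()/getCurrent(), which Hadoop
    // also backs with a thread-local.
    static final ThreadLocal<String> CALLER_CONTEXT = new ThreadLocal<>();

    // Stand-in for the JVM-wide cached filesystem instance that multiple
    // task threads may share.
    static class CachedFs {
        private final String context;

        CachedFs(String context) {
            this.context = context;
        }

        // Re-set the caller context at the start of every method that talks
        // to Hadoop, because the cached instance may be called from threads
        // other than the one that created it.
        private void refreshCallerContext() {
            CALLER_CONTEXT.set(context);
        }

        String open(String path) {
            refreshCallerContext();
            return "opened " + path + " as " + CALLER_CONTEXT.get();
        }
    }

    public static void main(String[] args) throws Exception {
        CachedFs fs = new CachedFs("flink_job_42"); // shared, cached instance

        // A different task thread uses the same cached instance; the context
        // is still correct because open() re-applies it per call.
        final String[] result = new String[1];
        Thread task = new Thread(() -> result[0] = fs.open("/data"));
        task.start();
        task.join();
        System.out.println(result[0]);
    }
}
```

Had `open()` not called `refreshCallerContext()`, the second thread would have seen a `null` context, which is the corner case the review comment describes.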
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]