Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/19885
If you make a Path from each of these and call getFileSystem() on them, you
will end up with two different FS instances in the same JVM. But they'll both
be talking to the same namenode, using the UGI of whoever was the current user
at the time getFileSystem() was called. That is: one cluster.
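To make that concrete, here's a minimal sketch (nn1:8020 and user1 are placeholder names, not from the PR):
```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val conf = new Configuration()
val fs1: FileSystem = new Path("hdfs://nn1:8020/").getFileSystem(conf)
val fs2: FileSystem = new Path("hdfs://user1@nn1:8020/").getFileSystem(conf)

// The FileSystem cache keys on (scheme, authority, current UGI); the extra
// userinfo changes the authority, so you get two distinct instances...
assert(!(fs1 eq fs2))

// ...but both are clients of the same namenode, authenticating as whoever
// the current UGI was when getFileSystem() ran, not as "user1".
println(fs1.getUri) // hdfs://nn1:8020
println(fs2.getUri) // hdfs://user1@nn1:8020
```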
* Nobody should be using user@ in an HDFS URL; it doesn't do anything (see
the URI sketch after this list).
* For better or worse, it does in wasb (I wish they'd just used the
hostname the way the others do).
* S3 does accept user:pass to put your full credentials in, but you get
told off for doing this (it gets logged in too many places), and at some point
we'll turn it off. Users shouldn't be doing it.
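Why the user@ part splits things at all: java.net.URI treats userinfo as part of the authority, and it's the authority which distinguishes filesystems. A quick sketch, with the same placeholder hostnames:
```scala
import java.net.URI

val plain    = new URI("hdfs://nn1:8020/")
val withUser = new URI("hdfs://user1@nn1:8020/")

println(plain.getAuthority)    // nn1:8020
println(withUser.getAuthority) // user1@nn1:8020 <- userinfo is part of the authority
println(withUser.getUserInfo)  // user1
println(withUser.getHost)      // nn1 (same host either way)
```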
If someone really does refer to a source JAR with a user1@hdfs:// URL and the
HDFS filesystem doesn't have a user, why not treat the FS as different and
not worry about how the filesystem interprets it? It's a special case you
aren't normally going to see. And the moment you try to go fs.makeQualified()
between the two, you'll get a stack trace.
That is, this is not valid:
```
new Path("hdfs://[email protected]:8020").getFileSystem(conf).open(new
Path("hdfs://[email protected]:8020"))
```
You'll inevitably get a stack trace in makeQualified. (Non-normative
statement: try it and see.)
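Roughly what "try it and see" looks like, sketched with the same placeholder names; FileSystem.checkPath() compares the full authority, userinfo included:
```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

val conf = new Configuration()
val fs = new Path("hdfs://user1@nn1:8020/").getFileSystem(conf)

// "user2@nn1:8020" != the FS's own authority "user1@nn1:8020", so this throws
// java.lang.IllegalArgumentException: Wrong FS: ..., expected: hdfs://user1@nn1:8020
fs.makeQualified(new Path("hdfs://user2@nn1:8020/some.jar"))
```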