Github user tgravescs commented on the issue:

    https://github.com/apache/spark/pull/13738
  
    Does this happen when you use the DataFrame load method?  I'm guessing not, 
because the datasource code does an hdfsPath.getFileSystem().
    
    For hadoopFile, textFile, and newAPIHadoopFile, since we have the path, I think 
we could just add a FileSystem.get(path, conf) call on them; if it's an hdfs 
path, that would trigger HdfsConfiguration underneath. The hard part is 
hadoopRDD and newAPIHadoopRDD, since the path isn't passed directly there but is 
set via a conf. Need to look at that one more.
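
    A minimal sketch of what that could look like (names like HdfsConfTrigger, 
forPath, and forJobConf are hypothetical, not part of this PR), assuming all we 
need is to resolve the FileSystem for the path so that an hdfs:// URI pulls in 
HdfsConfiguration:

    ```scala
    import java.net.URI

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.FileSystem
    import org.apache.hadoop.mapred.{FileInputFormat, JobConf}

    object HdfsConfTrigger {
      // Resolving the FileSystem for the path loads the scheme-specific
      // implementation; for an hdfs:// path that brings in HdfsConfiguration
      // (hdfs-default.xml / hdfs-site.xml) underneath.
      def forPath(path: String, hadoopConf: Configuration): Unit = {
        FileSystem.get(new URI(path), hadoopConf)
      }

      // For hadoopRDD / newAPIHadoopRDD the input paths only live in the conf,
      // but they can still be recovered from the JobConf and resolved the
      // same way.
      def forJobConf(jobConf: JobConf): Unit = {
        FileInputFormat.getInputPaths(jobConf).foreach(_.getFileSystem(jobConf))
      }
    }
    ```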

